I don't see why Apple would want to admit that such a thing is possible on their devices (if it is). If they can remove the auto-erase then surely it's only a matter of time before someone else works out how to do it?
They are saying that is impossible now. What the FBI wants is for them to make a new operating system which has a backdoor, i.e. build a system which can remove encryption.
So... they don't share the method. They just stop it deleting the data.
In my mind, that would be a risk that goes back to my original post. If Apple share a method, there's the risk of the government using it whenever they can justify it, and all of the future risks of weak morals that go with that. If they just implement something, it could possibly be reverse engineered by analysing what has been changed. That might be hard to do, or it might be very easy to spot.
There is no doubt in my mind that if Apple wanted to access what was on the phone, then they could. I don't see why they are under the impression that this automatically means everyone else in the world will start hacking iPhones using the same trick.
All the idiots stood outside Apple stores in the USA proudly sticking up for their favourite brand should ask themselves this question - if someone in their family was taken hostage and the only way to save them was by Apple opening someone's phone for the FBI, what would their stance be then? For the record I own an iPhone and iPad, I think their products are fantastic but I disagree with their stubborn approach here. Having said all of this I am no tech expert, I just cannot believe that there is no way to get in this guy's phone without compromising every other iPhone and its software.
Maybe you shouldn't go around accusing people of being idiots if you don't understand the issue then?
This kind of unlikely hypothetical situation can be used to justify anything and everything. If a family member was taken hostage, would I support the government arresting an entire neighborhood of people where the kidnapper was suspected to be living, in order to get my loved one back? Yes, probably. Doesn't make it right.
If I had to have a bet on it I'd back myself, put it that way. Good publicity for Apple though; any publicity is good publicity, etc. Apple fan bois aren't normal people anyway; they won't have looked at it rationally. I've admitted I could be wrong, but in their eyes Apple is God.
I used it because it's true. The same applies to Tim Cook; he'd have his hacking handbook out in a flash if it affected him personally.
Yes, but that's why a victim's family are not allowed to be involved in investigations or to decide what is appropriate for law enforcement in that situation. It's the same reason you wouldn't allow them to be on the jury, for example. When it comes to a loved one you'd be willing to do almost anything to have them back safe, including things you absolutely shouldn't be allowed to do.
I think Apple's stance is probably the opposite of good publicity. People, like yourself, who don't seem to really understand the debate are giving them an awful lot of stick for it. It only seems to be the tech community, a very small subset of Apple's customer base as a whole, or people that consider the longer-term ramifications of what it would mean that are sticking up for them.
People that are giving them stick are generally people that want the FBI to have access to the evidence to try and prevent future terrorist attacks. It may be a simple way of looking at it but it's true. I've read a lot on this and none of it has changed my opinion, clearly you are the same so fair play.
The problem is a few things:
1 - even if you do produce a backdoor for extreme circumstances, it only helps against weak passcodes: a six-digit numeric code can be cracked in about a day, but a complex alphanumeric password could take over 10 years because of all the possible combinations of numbers and letters (see the sketch after this list).
2 - each individual phone has its own hardware key that's created as part of the chip. Apple doesn't keep a record of it, and to extract the data off the chip you'd have to melt the plastic off it and use lasers to try to recover bits of it, with no guarantee of success in any given case.
3 - there's no guarantee that having the data would have prevented San Bernardino and similar attacks happening, and there's no guarantee it will prevent future attacks from happening either.
4 - there's no guarantee it can be kept to a case-by-case basis, and in the wrong hands the technique could be used to unlock hundreds of millions of devices' data, bank information, classified reports etc., and there's no guarantee it could be traced after the fact, because the whole point is to extract the data without it being traced back.
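To put rough numbers on point 1, here's a minimal back-of-the-envelope sketch in Python. The ~80 ms per guess is an assumption (a per-attempt delay commonly attributed to iOS's key derivation); the real rate would depend on the device and how guesses are submitted.

```python
# Worst-case brute-force time for different passcode shapes,
# assuming ~80 ms per attempt (assumed key-derivation delay).
SECONDS_PER_GUESS = 0.08  # assumption, not a measured figure

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to try every possible passcode of this shape."""
    return (alphabet_size ** length) * SECONDS_PER_GUESS

six_digit_pin = worst_case_seconds(10, 6)            # digits 0-9, length 6
alphanumeric = worst_case_seconds(26 + 26 + 10, 6)   # mixed case + digits, length 6

print(f"6-digit PIN:         {six_digit_pin / 86400:.1f} days")           # ~0.9 days
print(f"6-char alphanumeric: {alphanumeric / (86400 * 365):.0f} years")   # ~144 years
```

Even with the auto-erase limit removed, the size of the passcode alphabet alone decides whether a brute force takes a day or a lifetime.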
Speculation time. Just brainstorming some other ways that the FBI under immoral, unstable or corrupting influences (*cough cough Trump*) could use this development to their advantage.
Acquire and backdoor the phone of....
1 - a journalist, to access details on confidential sources
2 - a political rival, to gain leverage or advantage
3 - a foreign diplomat, for spying purposes
4 - a civilian in a position of influence, for blackmail purposes
5 - a protestor/civil rights activist/whatever, to gain information on their contacts
etc etc. I'm sure the list is endless.
Can you imagine the temptation? You could imagine a foreign diplomat of a larger nation being able to get around it with alternative tech, but ordinary citizens? Just ready to be plucked. If Apple were compelled to give access to legacy hardware too, then you could catch anyone who might be careful now but wasn't in the past, as long as you could find their phone. I know I have three or four old phones right here somewhere that I've hung on to or haven't sold.
Edit: Just thought of another one. Coming to America on holiday or to work? Let's scan your phone. Every single visitor to the States, take their private info. Why the hell not? Make it a visa requirement while you're at it. We're all criminals in one way or another, I'm sure.
1, 3, and 5 are nailed on to happen. Could see them going after college students to trace drug users and catch dealers. Or employees of companies suspected of employing illegal immigrants. As you say, the list is endless.
You only need to look at the leaked documents from the FBI under its former head J. Edgar Hoover to see that, even at a time when technology was nowhere near as advanced as it is now, the FBI and similar organisations employed illegal tactics to bring down organisations and individuals.
If they have access to a master key to every iPhone throughout the world, then there's no telling what they will do and who they will monitor. And the temptation for hackers will be far too extreme.
Number 3 is far from new. The FBI literally runs traditional spying operations against diplomats they suspect of actually being foreign intelligence agents, and has done so for decades.
There was a pretty huge scandal at the time of the Snowden affair regarding Glenn Greenwald's partner being detained in the UK under circumstances pretty similar to your edit.
Miranda was released, but officials confiscated electronics equipment including his mobile phone, laptop, camera, memory sticks, DVDs and games consoles.
"It's almost impossible, even without full knowledge of the case, to conclude that Glenn Greenwald's partner was a terrorist suspect.
"I think that we need to know if any ministers knew about this decision, and exactly who authorised it."
"The clause in this act is not meant to be used as a catch-all that can be used in this way."
Schedule 7 of the Terrorism Act has been widely criticised for giving police broad powers under the guise of anti-terror legislation to stop and search individuals without prior authorisation or reasonable suspicion – setting it apart from other police powers.
For those interested, here's an excerpt from a fantastic book on the fallacy of large-scale data mining for terror prevention:
http://digg.com/2015/why-mass-surveillance-cant-wont-and-never-has-stopped-a-terrorist
Why Mass Surveillance Can't, Won't, And Never Has Stopped A Terrorist
In his latest bestseller, Data and Goliath, world-renowned security expert and author Bruce Schneier goes deep into the world of surveillance, investigating how governments and corporations alike monitor nearly our every move. In this excerpt, Schneier explains how we are fed a false narrative of how our surveillance state is able to stop terrorist attacks before they happen. In fact, Schneier argues, the idea that our government is able to parse all the invasive and personal data they collect on us is laughable. The data-mining conducted every day only seems to take valuable resources and time away from the tactics that should be used to fight terrorism.
The NSA repeatedly uses a connect-the-dots metaphor to justify its surveillance activities. Again and again — after 9/11, after the Underwear Bomber, after the Boston Marathon bombings — government is criticized for not connecting the dots. However, this is a terribly misleading metaphor. Connecting the dots in a coloring book is easy, because they’re all numbered and visible. In real life, the dots can only be recognized after the fact.
That doesn’t stop us from demanding to know why the authorities couldn’t connect the dots. The warning signs left by the Fort Hood shooter, the Boston Marathon bombers, and the Isla Vista shooter look obvious in hindsight. Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” Humans are natural storytellers, and the world of stories is much more tidy, predictable, and coherent than reality. Millions of people behave strangely enough to attract the FBI’s notice, and almost all of them are harmless. The TSA’s no-fly list has over 20,000 people on it. The Terrorist Identities Datamart Environment, also known as the watch list, has 680,000, 40% of whom have “no recognized terrorist group affiliation.”
Data mining is offered as the technique that will enable us to connect those dots. But while corporations are successfully mining our personal data in order to target advertising, detect financial fraud, and perform other tasks, three critical issues make data mining an inappropriate tool for finding terrorists.
The first, and most important, issue is error rates. For advertising, data mining can be successful even with a large error rate, but finding terrorists requires a much higher degree of accuracy than data-mining systems can possibly provide.
Data mining works best when you’re searching for a well-defined profile, when there are a reasonable number of events per year, and when the cost of false alarms is low. Detecting credit card fraud is one of data mining’s security success stories: all credit card companies mine their transaction databases for spending patterns that indicate a stolen card. There are over a billion active credit cards in circulation in the United States, and nearly 8% of those are fraudulently used each year. Many credit card thefts share a pattern — purchases in locations not normally frequented by the cardholder, and purchases of travel, luxury goods, and easily fenced items — and in many cases data-mining systems can minimize the losses by preventing fraudulent transactions. The only cost of a false alarm is a phone call to the cardholder asking her to verify a couple of her purchases.
Terrorist plots are different, mostly because whereas fraud is common, terrorist attacks are very rare. This means that even highly accurate terrorism prediction systems will be so flooded with false alarms that they will be useless.
The reason lies in the mathematics of detection. All detection systems have errors, and system designers can tune them to minimize either false positives or false negatives. In a terrorist-detection system, a false positive occurs when the system mistakenly identifies something harmless as a threat. A false negative occurs when the system misses an actual attack. Depending on how you “tune” your detection system, you can increase the number of false positives to assure you are less likely to miss an attack, or you can reduce the number of false positives at the expense of missing attacks.
Because terrorist attacks are so rare, false positives completely overwhelm the system, no matter how well you tune. And I mean completely: millions of people will be falsely accused for every real terrorist plot the system finds, if it ever finds any.
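To make the arithmetic behind that claim concrete, here's a minimal sketch (mine, not Schneier's; the hit and false-alarm rates are illustrative assumptions) comparing the credit-card-fraud case above with terrorism detection:

```python
def precision(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """P(real threat | system raises a flag), via Bayes' theorem."""
    flagged = base_rate * hit_rate + (1.0 - base_rate) * false_alarm_rate
    return (base_rate * hit_rate) / flagged

# Credit card fraud: ~8% of cards are misused each year (figure from the
# excerpt); the hit/false-alarm rates here are illustrative assumptions.
print(f"fraud flags that are real:  {precision(0.08, 0.99, 0.01):.0%}")  # ~90%

# Terrorism: assume 100 active plotters in a population of 300 million,
# detected by an implausibly good system (99% hit rate, 0.1% false alarms).
p = precision(100 / 300_000_000, 0.99, 0.001)
print(f"terror flags that are real: {p:.4%}")  # ~0.03%, i.e. ~3,000 false alarms per plotter
```

Even with error rates no real classifier achieves, the rarity of plots means nearly every flag is a false alarm, which is exactly the point of the paragraph above.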
We might be able to deal with all of the innocents being flagged by the system if the cost of false positives were minor. Think about the full-body scanners at airports. Those alert all the time when scanning people. But a TSA officer can easily check for a false alarm with a simple pat-down. This doesn’t work for a more general data-based terrorism-detection system. Each alert requires a lengthy investigation to determine whether it’s real or not. That takes time and money, and prevents intelligence officers from doing other productive work. Or, more pithily, when you’re watching everything, you’re not seeing anything.
The US intelligence community also likens finding a terrorist plot to looking for a needle in a haystack. And, as former NSA director General Keith Alexander said, “you need the haystack to find the needle.” That statement perfectly illustrates the problem with mass surveillance and bulk collection. When you’re looking for the needle, the last thing you want to do is pile lots more hay on it. More specifically, there is no scientific rationale for believing that adding irrelevant data about innocent people makes it easier to find a terrorist attack, and lots of evidence that it does not. You might be adding slightly more signal, but you’re also adding much more noise. And despite the NSA’s “collect it all” mentality, its own documents bear this out. The military intelligence community even talks about the problem of “drinking from a fire hose”: having so much irrelevant data that it’s impossible to find the important bits.
We saw this problem with the NSA's eavesdropping program: the false positives overwhelmed the system. In the years after 9/11, the NSA passed to the FBI thousands of tips per month; every one of them turned out to be a false alarm. The cost was enormous, and ended up frustrating the FBI agents who were obligated to investigate all the tips. We also saw this with the Suspicious Activity Reports (SAR) database: tens of thousands of reports, and no actual results. And all the telephone metadata the NSA collected led to just one success: the conviction of a taxi driver who sent $8,500 to a Somali group that posed no direct threat to the US — and that was probably trumped up so the NSA would have better talking points in front of Congress.
The second problem with using data-mining techniques to try to uncover terrorist plots is that each attack is unique. Who would have guessed that two pressure-cooker bombs would be delivered to the Boston Marathon finish line in backpacks by a Boston college kid and his older brother? Each rare individual who carries out a terrorist attack will have a disproportionate impact on the criteria used to decide who’s a likely terrorist, leading to ineffective detection strategies.
The third problem is that the people the NSA is trying to find are wily, and they’re trying to avoid detection. In the world of personalized marketing, the typical surveillance subject isn’t trying to hide his activities. That is not true in a police or national security context. An adversarial relationship makes the problem much harder, and means that most commercial big data analysis tools just don’t work. A commercial tool can simply ignore people trying to hide and assume benign behavior on the part of everyone else. Government data-mining techniques can’t do that, because those are the very people they’re looking for.
Adversaries vary in the sophistication of their ability to avoid surveillance. Most criminals and terrorists — and political dissidents, sad to say — are pretty unsavvy and make lots of mistakes. But that’s no justification for data mining; targeted surveillance could potentially identify them just as well. The question is whether mass surveillance performs sufficiently better than targeted surveillance to justify its extremely high costs. Several analyses of all the NSA’s efforts indicate that it does not.
The three problems listed above cannot be fixed. Data mining is simply the wrong tool for this job, which means that all the mass surveillance required to feed it cannot be justified. When he was NSA director, General Keith Alexander argued that ubiquitous surveillance would have enabled the NSA to prevent 9/11. That seems unlikely. He wasn’t able to prevent the Boston Marathon bombings in 2013, even though one of the bombers was on the terrorist watch list and both had sloppy social media trails — and this was after a dozen post-9/11 years of honing techniques. The NSA collected data on the Tsarnaevs before the bombing, but hadn’t realized that it was more important than the data they collected on millions of other people.
This point was made in the 9/11 Commission Report. That report described a failure to “connect the dots,” which proponents of mass surveillance claim requires collection of more data. But what the report actually said was that the intelligence community had all the information about the plot without mass surveillance, and that the failures were the result of inadequate analysis.
Mass surveillance didn’t catch underwear bomber Umar Farouk Abdulmutallab in 2006, even though his father had repeatedly warned the U.S. government that he was dangerous. And the liquid bombers (they’re the reason governments prohibit passengers from bringing large bottles of liquids, creams, and gels on airplanes in their carry-on luggage) were captured in 2006 in their London apartment not due to mass surveillance but through traditional investigative police work. Whenever we learn about an NSA success, it invariably comes from targeted surveillance rather than from mass surveillance. One analysis showed that the FBI identifies potential terrorist plots from reports of suspicious activity, reports of plots, and investigations of other, unrelated, crimes.
This is a critical point. Ubiquitous surveillance and data mining are not suitable tools for finding dedicated criminals or terrorists. We taxpayers are wasting billions on mass-surveillance programs, and not getting the security we’ve been promised. More importantly, the money we’re wasting on these ineffective surveillance programs is not being spent on investigation, intelligence, and emergency response: tactics that have been proven to work. The NSA's surveillance efforts have actually made us less secure.
Right, but Apple's point, and the one the supporters of the FBI are glossing over, is that this technology currently doesn't exist (whether it really does or not is immaterial). Once you create this technology, you've opened Pandora's box.
Assuming you believe the FBI are utterly benevolent and will only use this power for good (a highly tenuous assumption, but hey), Apple is a multinational company and sells its phones and tablets in many territories; if it complies with a US court order, it then has to comply with a Chinese order, or a Russian order, and so on and so forth. At a trans-national level, iPhone security becomes worthless. This should worry the FBI in itself, because currently iPhone encryption is considered good enough that iPhones are permitted to be used in the US government. Creating a backdoor to iPhone encryption is, therefore, a greater national security risk than not doing so.
This is not even to consider the effects it would have on lower-level crime. If a back door exists, it will be found; iPhone security then effectively becomes worthless. Cybercriminals will be able to bypass it, and so will the very terrorist groups this back door supposedly 'protects' us from. In the process of having this tech for the instances where it may be useful, you create thousands more instances where it is actively harmful.
Whilst this might be a 'national security' vs 'privacy' issue in this instance, the framing of the debate in general as such is completely disingenuous. It is a national security vs national security dispute.
Even if, in this instance, there is a way for Apple to access the phone without creating a back door in all iPhones (and there may well be, considering that we're talking about, I believe, a 5c, which is a pretty old bit of kit), the precedent set by Apple complying with these orders is as bad as if they had compromised the security of all of them.
Good post. Also interesting that although the court decisions are public, I believe, Apple chose to bring this to the front page. Which is a separate decision from fighting it or not, as it could've been appealed discreetly.
Also, realistically, I think companies comply much more with orders in their home country (and in the US, if they're not American) but not very often abroad. Brazil tried to get FB to hand over some WhatsApp messages a while back in a drug investigation; I think FB basically played dead. The home country and the US are where people are most worried about getting in trouble. The telecom companies, if I'm not mistaken, helped the NSA set up much of its surveillance, even abroad. But if China asked the same I think they'd just say "nah".
Might not be down to them though. You could easily imagine forms of blackmail that China could use to obtain the technology from the government.
Good article.
That assistance includes disabling the phone's auto-erase function, which activates after 10 consecutive unsuccessful passcode attempts, and helping investigators to submit passcode guesses electronically.
Last night I dreamed that Apple did this.
If they get the phone from a suspect, it means the terrorist attack has already happened.
This article is spot on.
http://www.theverge.com/2016/2/19/1...=article&utm_medium=social&utm_source=twitter
Misusing it in what way though? In my case, the only thing they could do me for is smoking the occasional doobie, and I'm guessing it'd be similar for 99% of the population. Which I honestly don't think they'll use all this for. But even then, I think I would be OK with it, as long as the main aim is to get the 1% who are really up to no good and it helps them make some progress on that front. For me personally, safety edges out privacy. I know I see it a bit simplistically, but that's how I feel.
I don't know how old you are, but no one knows where they will be in 20 or 30 years. In that time, info that you want kept private could be used to ruin your life.