June 9, 2018

Link: WWDC 2018 Idle Thoughts ☍

Gabe Weatherhead:

In all truth, I feel much better about my status as an Apple fan after the 2018 keynote. In many ways, it felt like an acknowledgement of two major categories of buyers: The young and the old. It’s fine to joke about MeMoji and the silliness of 32 person FaceTime parties but that’s the stuff that will pull in younger users. There are very few 11 year olds that want sudo access to their shell. But then we get things like dark mode and audio APIs for the watch. That’s clearly targeting us old crusty folks that like to shake sticks at things.

Any time Apple introduces new, “fun” features to iOS, a lot of tech writers complain that they don’t appeal to them, treating that as a sign the company is unfocused and missing a step. Instead, Apple is finding ways to engage everyone, which has always been a goal for the company.

Link: On Paying for Software ☍

Seth Godin:

I like paying for my software when I’m buying it from a company that’s responsive, fast and focused. I like being the customer (as opposed to a social network, where I’m the product). I spend most of my day working with tools that weren’t even in science fiction novels twenty-five years ago, and the money I spend on software is a bargain–doing this work without it is impossible.

If I had a dollar for every time someone I knew proudly proclaimed that they only use free apps, I’d probably have enough money to quit my day job. Somehow, even for a few dollars, the majority of people don’t value quality software and would rather put up with annoying ads and privacy issues. For me, I’ll gladly spend a few dollars to try a promising app that was made with great care, and there are some companies that get a purchase from me no matter what. In those cases, I’m a fan of their other work and want to support their business (not to mention, I’ll usually find a use for said product).

June 6, 2018

Link: Why a DNA Data Breach is Much Worse Than a Credit Card Leak ☍

Angela Chen for The Verge:

This week, DNA testing service MyHeritage revealed that hackers had breached 92 million of its accounts. Though the hackers only accessed encrypted emails and passwords — so they never reached the actual genetic data — there’s no question that this type of hack will happen more frequently as consumer genetic testing becomes more and more popular. So why would hackers want DNA information specifically? And what are the implications of a big DNA breach? […]

As the Equifax hack last year showed, there’s a lack of legislation governing what happens to data from a breach. And ultimately, a breach of genetic data is much more serious than most credit breaches. Genetic information is immutable: Vigna points out that it’s possible to change credit card numbers or even addresses, but genetic information cannot be changed. And genetic information is often shared involuntarily. “Even if I don’t use 23andMe, I have cousins who did, so effectively I may be genetically searchable,” says Ram. In one case, an identical twin having her genetic data sequenced created a tricky situation for her sister.

When all these “send us your DNA and we’ll tell you about yourself” services popped up, I was skeptical for a number of reasons, one being our culture’s overall lack of concern about data breaches of any kind. While leaked credit card information is frustrating, my issuers have been good about getting me replacements and not holding me liable. Social Security numbers and now DNA are much more permanent, and the Equifax hack has proven that nobody seems to care if your Social Security number is out there. Why would DNA be any different?

May 9, 2018

Link: It Doesn’t Look Like Anything to Me ☍

Alex Cranz for Gizmodo:

Google could soon have a feature that lets your phone impersonate people—because consumer-facing artificial intelligence isn’t terrifying enough. Called Duplex, it’s intended to make people’s lives easier by handling standard phone calls that are necessary, but not especially personal.

In examples Google demonstrated on stage during the I/O keynote, Google Assistant called a hair stylist to arrange an appointment and called a restaurant to get information about a reservation, using a voice that sounds a little less robotic than the standard Google Assistant (whether that voice is the user’s or a standard Google Duplex voice has not been made clear). And sure that’s ostensibly kind of neat. Intellectually speaking, I am very impressed with this technology! A voice that can contact human beings and impersonate them reasonably well, including using filler words like “um,” is a remarkable feat of AI engineering.

I find the technology behind this quite impressive, too, but it also gives me a bit of an unsettled feeling. I understand the filler words are intended to give a more human feel, but it veers too far into the uncanny valley. More importantly, if it works as advertised, it’s good enough that many people may not realize they’re not talking to a human. Furthermore, we already talk about technology addiction and the decline of personal communication; what does having our phones make our phone calls do to that dynamic? What if companies implement this technology and phase out call centers? While that might be an improvement, you could also end up in a situation where the AI is just as infuriating as the automated phone menus that send you in loops.

This isn’t sour grapes because Siri is lacking; if Siri could do this, I’d be equally creeped out. I think there are a lot of tech writers and pundits who believe smart assistants are the next big thing, when many people may use them more casually. I could be wrong, but right now, examples like this just feel off to me. On the other hand, there are excellent use cases in the realm of accessibility.

May 3, 2018

Link: If You Use Twitter, Change Your Password ☍

Twitter CTO Parag Agrawal:

When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.
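For what it’s worth, “masks it” here presumably refers to one-way hashing. Here’s a minimal sketch (in Python, and not anything from Twitter’s actual codebase) of the difference between what a service intends to store and what a logging bug like the one described would capture:

```python
import hashlib
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these values should ever be persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def buggy_login_handler(password: str) -> None:
    # The kind of bug described: the unmasked password gets written to an
    # internal log before it is ever hashed.
    print(f"DEBUG: password={password}")  # plaintext lands in the log
    store_password(password)
```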

The original title for this post was “Keeping your account secure,” which is a fun way to avoid stating that passwords were made easily accessible. I’ll yield to Nick Heer’s take:

Interestingly enough, this was posted with the title “Keeping your account secure”, as opposed to a more accurate headline, like, “Oops, we stored your password in plain text”, or “We know the president’s password, for real”.

The euphemistic and misleading headline upsets me. What’s even more worrying is Agrawal’s reaction in a tweet:

We are sharing this information to help people make an informed decision about their account security. We didn’t have to, but believe it’s the right thing to do.

I have a problem with this because, phrased that way, it makes disclosure sound like the fat-free, low-carb, better-for-you option. Any business that holds user data and suffers some sort of breach or mishandling has an obligation to those who have a relationship with it to disclose that information in a reasonable amount of time. Although there’s always the debate about whether social networks treat users as customers or products, I think Twitter did the only thing it should have done by disclosing this information.

I’ve already deleted an account on one social network this week; don’t make me consider a second, Twitter.