
Dating-ish (Knitting in the City #6)

By: Penny Reid


"Ha!"

"Look," he turned toward his monitor, "I thought we'd start here. This is a scatterplot of women in their thirties, displaying trends of responses. And you can see here how the responses are clustered, giving us prototypical subsets. Four main types of respondents exist, represented by four different colors. Now, down here, below, you can see how the responses to our interview are also clustered, except the colors are mixed."

"What does that mean?"

"That means a woman's demographics and responses via the dating website data, which determine the original cluster, don't allow us to predict how she will respond to our interview, and therefore what she most values in a partner."

"Is that bad?"

He tilted his head back and forth in a considering motion. "No. Not bad. There's not really a bad. Just surprising."

Matt continued showing me scatterplot graphs, analyses, some raw, de-identified data, all the while munching on my macaroons. I didn't detect any of his previous baiting and belligerence from two weeks ago. Perhaps the cookies had brought about his change in attitude. Or maybe he really did want my questionnaire data very badly. Whatever the reason, I was relieved by his easygoing manner.

He showed me how his team was attempting to create personality algorithms for their AI, dependent on how a woman responded to the interview. It was fascinating, and I wasn't sure I comprehended all of it, but by the time we were wrapping up, my brain was exhausted.

"We're not pursuing a DeepMind AI, not yet. Emotional intelligence is our primary aim."

"DeepMind? What's that?" I glanced up from my notes.

"That's, well, how do I explain this? That's Google's AI." His expression became conflicted. "It's . . . well, it's advanced. And the simulations they've run so far have shown fascinating, if not disturbing, results, none of which have been published in peer-reviewed journals as of yet."

"What do you mean, disturbing?"

"It becomes aggressive when faced with competing resources, but cooperative when it's in DeepMind's best interest to be cooperative," he said starkly. "It wasn't taught that behavior; DeepMind learned it. Self-taught."

"Interesting."

"Right. Our prototype won't learn to protect itself from harm, or compete for resources. It won't be self-serving, like DeepMind. We've specifically designed it to eschew ego."

"But without ego, will it have self-worth?"

"No," he responded simply.

I frowned, wincing slightly. "Don't you think that's a bad idea?"

"Why?" He looked curious.

"I mean, the implications for people, humans, who own this robot, assuming you meet your aims, are somewhat concerning. People who choose this robot as a companion, as a life partner, won't have any demands placed upon them. They'll never have to be unselfish."

"Exactly." Matt acted as though I'd just answered my own question. 

"No. Not exactly," I argued, feeling deep down that the idea of creating substitutes for humans that were devoid of self-worth was dangerous. "What if people start mistreating their robots? Purposefully?"

"Mistreating a robot?" Matt echoed, as though I'd spoken a different language, and then a sly grin spread over his features. "You mean like, pushing its buttons? Get it?"

I had a hard time fighting my smile at his goofiness. "No. I mean . . ."

"Or playing something other than its favorite music, which everyone knows is heavy metal."

I groaned, laughing and shaking my head. "Oh wow. That was impressive."

"Thank you, thank you." As he examined my face, his smile deepened and his eyes warmed, as though he was both surprised and pleased by my laughter. "Sorry for interrupting, I just have a million robot jokes and no one lets me tell them."

"You can tell them to me, anytime."

"Good to know." He nodded slowly, inspecting me with his lingering smile, like I was something different. We swapped stares for a few protracted seconds, during which I admired how humor, being funny on purpose, did something wonderful for his features.

Eventually, he shook himself, clearing his throat and nodding once deferentially. "I'm sorry, I interrupted you. You were saying, about mistreating robots."

"Oh, yes. What about ethics? Have you or any of your colleagues considered developing a regulatory board or oversight system for the treatment of robots or AI?"

Matt flinched back, his eyes wide, and stared at me like I was nuts. "No. Why would there be?"