Free writing on trust, learning and adaptation

(Below are some unedited, purely speculative, unsubstantiated and open-ended ideas. The post is more of a question-raising exercise than anything. How this relates to music technology and interactivity is not yet clear to me… but I’m sure it’s not far off!)

I’ve been thinking a little recently about trusting technology, beginning with one of the most ubiquitous pieces of software we engage with daily: our search engines. We use search engines to find new information, but we also use them to outsource a capacity we once held ourselves: storing that information. We have become very good at knowing where to find information, but potentially less adept at retaining it ourselves.

The ease of use of our search engines allows us to look up information and have it available within a split second. The practice of tracking down information has become so rehearsed and ingrained that it may have replaced our capacity to store information the way we used to. To me this signals a real shift towards an extended man-machine interface in everyday life. We have outsourced this capacity to machines.

So it then becomes a question of trust. If we rely so heavily on machines to retain the information we need for our daily tasks, should we not be more wary of the trust we place in them? How can we be sure they are providing the most relevant and targeted information we are seeking? Most of the time we look at only the first few results returned for our queries. When we click on a hyperlink that points towards a source purportedly containing the information we seek, we are taking a blind leap of faith that the search algorithm has captured the most relevant data.

One interesting observation, however, is that in trying to make these machines serve our purposes, we end up second-guessing the methods by which we interact with them, honing our own skills over time and through practice. We know our search engines are well honed and accurate most of the time, which explains why most searches are abandoned if they don’t return a relevant result on the first page, or even in the first few results. We therefore adapt our behaviour in order to work within the machine’s own constraints. If a particular search query doesn’t yield the appropriate results, it is up to us to amend our query, rather than up to the machine to interpret what we want.

In this way there’s a give and take of agency between man and machine. Over repeated use, we implicitly come to understand the rulesets imposed by the technologies we use, and we adapt our own behaviour to suit their idiosyncrasies and get the best results. Looked at from one perspective, this friction might be seen as something to be avoided at all costs, leading to new ideas about teaching the computer to anticipate and interpret our needs, thereby eliminating our need to modify our behaviour. The theory behind this way of thinking is that the machine should know what the human wants, interpreting their desires regardless of the mode of inquiry. This kind of artificial intelligence gathers information about a user in order to accurately predict the context of the next enquiry. But what if this friction was not something that empowered the human user, but instead something that caged the user within already established modes of interaction?

If a piece of software is geared towards capturing as much information as possible about the user’s past as a predictor of their future, and towards cross-referencing that past with the histories of what it deems to be like-minded humans, does this not constrain the user to a definition drawn from both their own past and the pasts of other users, without acknowledging our ability to adapt? I’m starting to think that this is a fundamental issue, and one that is pertinent to all of our interactions with such new technologies. By teaching machines to adapt dynamically to our needs, we are forcing them to predict our future only from our past and the pasts of others. More importantly, perhaps, we might be teaching our machines that we ourselves do not adapt and change, which we know to be a fallacy.
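
To make the worry a little more concrete, here is a minimal, purely illustrative sketch in Python of the kind of prediction such software performs. All of the data, names and scoring choices are invented for illustration; the point is only that the system scores a user’s likely next interest from their own recorded past and the pasts of users it judges similar, so nothing outside those histories can ever be suggested.

```python
from collections import Counter

# Toy interaction histories: everything the system "knows" about each user
# is a record of past topics. Users and topics here are invented.
histories = {
    "alice": ["jazz", "synthesizers", "jazz", "modular"],
    "bob":   ["jazz", "synthesizers", "field recording"],
    "carol": ["gardening", "jazz", "synthesizers"],
}

def similarity(a, b):
    """Crude overlap measure between two users' pasts."""
    return len(set(histories[a]) & set(histories[b]))

def predict_next(user):
    """Score candidate topics from the user's own past and from 'like-minded' users.

    The candidate pool is only topics that already appear in some history,
    so the prediction can never point anywhere genuinely new.
    """
    scores = Counter(histories[user])          # weight the user's own past
    for other in histories:
        if other == user:
            continue
        w = similarity(user, other)
        for topic in histories[other]:
            scores[topic] += w                 # weight similar users' pasts
    return scores.most_common(3)

print(predict_next("alice"))
```

Even in this toy form, the system’s model of what a user wants next is nothing more than a weighted replay of recorded pasts, which is exactly the constraint described above.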

The machine may well be considered intelligent, autonomous and adaptive by our standards, but where does this leave the machine’s conception of the human? Looked at in this light, it might see us as unadaptable, atemporal beings that do not learn, change or adapt our behaviour to suit our environment. Perhaps this is the future of our interactions with technology when we privilege technology that changes, adapts and moulds itself to suit us. By designing technology to empower the user, perhaps we are restricting the future of our own abilities to our current conceptions of them, as encoded in our machines.
