Monday, 21 August 2023

Searching Through Netflix: The Foggy Clouds Of Confabulation

 

I turned on Netflix last night, and a flood of suggested TV programmes poured onto my screen. This got me thinking about Netflix's algorithms, which make up a kind of 'recommendation engine' that serves us suggestions on what to watch. Naturally, the engine's potential is enhanced if it knows what we've previously watched. But it obviously isn't the case that our taste patterns can simply be demarcated into singular categories like thrillers, dramas, comedies, science fiction, fantasy, horror, and what have you. Many films we like and dislike are a combination of those genres.

And then there are many other complex considerations related to our tastes: which actors and actresses are in the programmes, where and when they are set, when during the year we watch them, what mood we are in at the time, how tired we are, who we are watching with, how hungry we are, and so on. Further, how many stunts, fights, car chases, relationship arguments and sex scenes have you watched recently? Did you read the reviews beforehand? Was there someone in the cast you find attractive? Was there a school, university or hospital? Did it contain anything supernatural? Was there a twist at the end? Did the ending feel ambiguous in a good way or a bad way? The list goes on, and the number of elements that contribute to our feelings about films and TV shows far exceeds our ability to apprehend them and their causal relationship to our tastes.

But an optimisation tool like the one Netflix uses to delineate the components of this matrix (formally, a singular value decomposition, or matrix factorisation, technique) isn't constrained by our own lack of knowingness: it can take all the raw data and formulate matrices of latent factors that act as a simulation of what we ourselves might choose if we could process all that information. So, in one sense, it understands our viewing preferences even better than we do; in another sense, it is only producing non-sentient interpretations of patterns - ultra-sophisticated guesswork - without being able to properly capture why those patterns hold. In other words, it can get to the 'what' a lot more easily than it can understand the 'why'.
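
To make the idea a little more concrete, here is a minimal, purely illustrative sketch in Python of that kind of factorisation: a toy viewer-by-programme ratings matrix is decomposed and then rebuilt from a couple of latent 'taste' factors. The numbers are invented and this is not Netflix's actual system - just the general shape of the technique.

```python
import numpy as np

# Toy viewer-by-programme ratings matrix (rows: viewers, columns: programmes).
# Zeros stand in for "not yet watched"; real systems handle missing data far
# more carefully than this.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Singular value decomposition: ratings ≈ U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

# Keep only the top-k latent 'taste' factors - the compressed pattern the
# engine works with instead of the raw, unknowable 'why' of our preferences.
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The low-rank reconstruction fills in plausible scores for unwatched titles,
# which is the raw material for a recommendation.
print(np.round(approx, 2))
```

The reconstruction suggests scores for unwatched titles from pattern alone; nothing in it knows, or needs to know, why any particular viewer likes what they like.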

This has parallels with what is known in psychology as 'confabulation' - the errors we make about ourselves when our interpretive and memory processes become fuzzy or distorted. Just as Netflix's algorithms can spew out correlative artefacts for programme suggestions - based on fairly good data, yet never able to fully capture the complexity behind any individual viewer - so too do our own minds confabulate a personal worldview and belief system whose provenance is beyond our full understanding. In other words, we really don't have the mental and emotional artillery to discern the true sources, inspirations and motivations of our views and beliefs, even though we can often quite easily articulate rational justifications for them.

Consequently, if each of us has a complex set of views and beliefs that make up our aggregated worldview, then it is epistemic humility - a proper sense of scale and perspective, and an honest awareness of how far we've travelled and how much further there is to go - that will best help us advance through the foggy clouds of confabulation. A more accurate estimate of our own understanding (the kind the Dunning-Kruger effect warns we tend to lack) also makes for better conversations.

In a counterintuitive way, it is continual awareness of our limits that leads us ever quicker to the horizons of advancement, because we aren't taking injudicious short cuts, and we are not so hampered by pride and ego. We are simply content with a graduated exposure to new vistas and broader terrain. To that end, one of the greatest impediments to self-advancement is found in the gulf between who we really are and who we tell ourselves and others we are in order to preserve a persona.

 
