Story of Perspectives - Building a Bias Cloud

TL;DR: A succinct proposal of an approach toward building a ‘perspective cloud’ - a platform for understanding cognitive biases and heuristics in content and user behaviour.

This narrative provides a succinct proposal of an approach toward building a ‘perspective cloud’. For the sake of consistent nomenclature, we will use the term ‘bias cloud’, which will collectively represent bias, heuristic, effect, affinity, disposition and any other behavioural traits discovered.

It is imperative to broadly sketch the contours of this story line, and the following picture attempts to visualise its essence. The sections following this image provide further details of each constituent.

We briefly deliberated on the need to define a multi-pronged strategy to aid in building a comprehensive ‘bias cloud’ platform. We believe this approach would give us the necessary direction and identify both the individual constituents and the collective stages of implementation. This is an initial draft of the proposal and is subject to change based on review comments from all stakeholders.

Stage one - Survey

A paradigm shift that enables decision intelligence is to “explain what you want, not with instructions but with examples” [1]. This activity is an apt candidate for such an implementation.

  • In its formative stages, any notion of a bias cloud requires a good understanding of the subject, which can aid in demonstrating the ability to discover and map known biases for given content. As such, the primary task at hand is to conduct a literature survey to confirm that such materials are available in the behavioural science/ psychology community. Access to these resources (free or purchased) is a key ingredient of this endeavour.
  • The content referred to here can be in textual, image, audio, video or other consumable media formats. We begin our topic modelling with text-oriented analysis and add support for other media and languages as we proceed.
  • The textual materials envisaged here can come in the form of word associations, frequencies, strengths or patterns. The examples accumulated should clearly indicate how to extrapolate from word occurrences to bias. If we stumble upon a new methodology for deriving bias inclinations from content, sufficient usage clarity must follow.
  • In the event that no such materials are available, a transactional layer enabling a community (hired, incentivised or participatory) must be provisioned at Organisation.One Engage to build the essential corpus organically.
  • A transactional layer should also provide the ability to record materials collected from external sources, e.g. from the behavioural science community when available. All such materials collected shall be ascribed to a ‘stimuli source’.
  • A corresponding transactional layer should provide for mapping actions against a ‘stimuli source’, leading to an ‘effect repository’. It should be noted that assigned traits might be specific to an industry.
  • Optionally, if none of these are feasible, add the ability for either the content owner to assign biases during the creation flow or a behavioural expert to perform this operation periodically at a global level.
  • Data points to be considered:
    • Text/ audio/ video/ image content
    • Probability distribution of bias stimuli (n)
    • Reference for bias stimuli
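The data points listed above could be captured in a record like the following minimal sketch. This is illustrative only; all field names are hypothetical, assuming a Python-based transactional layer.

```python
from dataclasses import dataclass


@dataclass
class StimuliSourceRecord:
    """One entry in the 'stimuli source' corpus (hypothetical schema)."""
    content: str                # the text, or a reference to audio/video/image
    media_type: str             # e.g. "text", "image", "audio", "video"
    stimuli_distribution: dict  # bias name -> probability (sums to 1 over n stimuli)
    reference: str              # citation for the bias stimuli

    def validate(self) -> bool:
        # The probability distribution over the n bias stimuli must sum to 1.
        total = sum(self.stimuli_distribution.values())
        return abs(total - 1.0) < 1e-9


record = StimuliSourceRecord(
    content="Markets crash as fear spreads",
    media_type="text",
    stimuli_distribution={"negativity bias": 0.7, "herd mentality": 0.3},
    reference="behavioural-science-community/example-study",
)
print(record.validate())
```

A validation step like this would let the transactional layer reject malformed contributions before they enter the corpus.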

Stage two - Learning

An effective learning strategy is an opportunity to “automate the ineffable” [1].

  • This segment delves into an exploration of modelling techniques and a proposal to build state-of-the-art classification engines [2] for content (stimulus) and interaction (effect).
  • Any learning opportunity for the ‘stimulus engine’ will treat the ‘stimuli source’ as its data source. To begin with, we shall assume a structure for the incoming data source, which can provide a format definition that the materials must adhere to. If there are deviations at the incoming data points, a suitable data preparator will be added to bridge the differences.
  • The emergent model for the ‘stimulus engine’ would allow us to query newly created content and generate a bias stimulus matrix as a prediction.

E.g.

Stimuli               Affinity   MOE
Negativity bias       35%        0.26
Humour effect         26%        0.71
Illusion of validity  1.7%       0.1
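One illustrative way the ‘stimulus engine’ might turn word occurrences into a bias stimulus matrix is sketched below. The lexicon and its bias mappings are invented for illustration; a production engine would instead use a trained classifier such as the Spark MLlib models cited in [2].

```python
from collections import Counter

# Invented toy lexicon mapping words to bias stimuli (illustration only).
BIAS_LEXICON = {
    "crash": "negativity bias",
    "fear": "negativity bias",
    "joke": "humour effect",
    "funny": "humour effect",
    "proven": "illusion of validity",
}


def bias_stimulus_matrix(text: str) -> dict:
    """Score content against the lexicon and normalise counts into affinities."""
    words = text.lower().split()
    hits = Counter(BIAS_LEXICON[w] for w in words if w in BIAS_LEXICON)
    total = sum(hits.values())
    if total == 0:
        return {}
    return {bias: count / total for bias, count in hits.items()}


matrix = bias_stimulus_matrix("Markets crash as fear spreads no joke")
print(matrix)
```

A real model would also attach a margin of error (MOE) to each affinity, as in the table above; this toy scorer only produces the affinity column.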
  • In the absence of an ‘effect repository’, observations from the bias stimulus matrix are recorded as-is against a given user interaction.
  • The ‘effect engine’ is meant to further augment bias associations, thus improving accuracy. It would allow us to query newly occurring user interaction signals and generate a bias effect matrix as a prediction. Such predictions should ideally be specific to a vertical (though this needs further discussion).
  • Predictions from the ‘effect engine’ feed into the ‘nudge engine’ and the ‘behaviour engine’.

E.g.

Effect             Affinity   MOE
Herd mentality     35%        0.1
Confirmation bias  12.5%      0.02
Spontaneous        8%         0.71
  • Validation of bias associations derived through the ‘effect engine’ requires user consent. Once a threshold is hit for a given bias category, the ‘nudge engine’ will provide the necessary nudge guidance matrix, which aids in dispatching relevant psychometric questions for confirmation. For a given set of users, the most relevant guidance measures should be surfaced.
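The consent-and-threshold flow described above could look like the following minimal sketch. The threshold value and the question identifiers are invented placeholders, not part of the proposal.

```python
# Hypothetical affinity threshold at which the 'nudge engine' seeks confirmation.
CONFIRMATION_THRESHOLD = 0.30

# Invented mapping from bias category to a psychometric question id.
PSYCHOMETRIC_QUESTIONS = {
    "herd mentality": "Q-HM-01",
    "confirmation bias": "Q-CB-01",
}


def nudge_guidance(effect_matrix: dict, consented: bool) -> list:
    """Return psychometric question ids to dispatch for biases above threshold."""
    if not consented:  # validation of bias associations requires user consent
        return []
    return [
        PSYCHOMETRIC_QUESTIONS[bias]
        for bias, affinity in effect_matrix.items()
        if affinity >= CONFIRMATION_THRESHOLD and bias in PSYCHOMETRIC_QUESTIONS
    ]


effects = {"herd mentality": 0.35, "confirmation bias": 0.125, "spontaneous": 0.08}
print(nudge_guidance(effects, consented=True))
```

With the example effect matrix above, only herd mentality crosses the threshold, so only its confirmation question would be dispatched.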

Stage three - Application

  • The learning pipeline proposal attempts to elucidate assisted-intelligence models that will bring us closer to our ‘behavioural memory’ vision.
  • However, it is essential to discover industry-specific use cases with corresponding nomenclature to promote platform usage, e.g. how do we use the ‘bias distribution table’? Which outcomes would benefit from such matrices?
  • It will also be helpful to prioritise the bias theories investigated in stage one so that they align with market expectations.
  • The theory-to-vehicle association must be explicitly defined based on the product manifestation.
  • The correlation between academic and industry terminologies must be established. Provisioning a portal for industry experts to assign use cases to the bias listing and explain their relevance must be explored.
  • Data sink requirements that allow ingestion through a defined organisation ontology (termed a ‘conversation unit’), along with cross-model behaviour approximation, are application-centric topics.

Stage four - Evaluation

  • Every successful conversation is considered an experiment yielding certain observations. This segment would allow us to verify whether a given model is functioning within desirable parameters.
  • A transactional layer should allow us to generate an assessment question repository that aids in verifying bias assumptions.
  • A nudge system must be devised with the ability to embed an ‘assessment question’ along with a vehicle. A compliance process to capture explicit concurrence from the content creator should be available.
  • The nudge system would ideally pick aptly worded embeds to invoke a response from users.
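The embedding step described above could be sketched as follows. The wording pool and question id are invented examples, and the creator-concurrence check stands in for the compliance process mentioned earlier.

```python
import random

# Invented pool of alternative wordings for one assessment question.
EMBED_WORDINGS = {
    "Q-HM-01": [
        "Would you have chosen this if nobody else had?",
        "Did others' choices influence yours here?",
    ],
}


def embed_assessment(question_id: str, creator_consented: bool):
    """Attach an assessment question wording to a vehicle, respecting compliance."""
    if not creator_consented:  # explicit concurrence from the content creator
        return None
    wordings = EMBED_WORDINGS.get(question_id, [])
    return random.choice(wordings) if wordings else None


print(embed_assessment("Q-HM-01", creator_consented=True))
```

Rotating wordings at random is one simple way to test which phrasing best invokes a response; a later iteration could learn the best wording per user segment.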

References

[1] Talks from Cassie Kozyrkov - https://medium.com/@kozyrkov

[2] Spark MLlib classification algorithms - https://spark.apache.org/docs/2.2.0/mllib-naive-bayes.html