Since there are no currently active contests, we have switched Climate CoLab to read-only mode.

Community Discussions

Common Visual Representation


Dmytro Ivakhnenko

Nov 23, 2012


Hi,

In the MIT Collective Intelligence Handbook there's a short but meaningful phrase: "Schwartz (1995) [53] argues that groups will always perform worse than individuals unless there is some collaborative creation of a visual representation."

IMHO, the current mechanical adding-up of proposals is highly inefficient. I would propose the creation of a common visual representation: each participant would draw a systemic diagram of his area of competence, and then these systemic diagrams would be merged. This collaborative process is well described and tested as "Interlock Research". Chapters from the book by the author of Interlock Research are available at: (see chapter 6 in particular).

Best wishes,
Dmytro

Mark Hurych

Nov 25, 2012


Dmytro, hi. ...YES, and... [warning, I am a trickster of sorts]

We should welcome any viable inroads to global challenges, as well as any workarounds that allow more direct participation by ever greater numbers of individuals and groups. Very often a marginal or outlier concept may hold a key to greater success in terms of non-zero-sum progress for all.

IMHO, there are at least three concepts that will add to the overall benefit for humanity's future as we address global challenges:

1) Individual intrinsic motivation ("AMP"): Autonomy, Mastery, Purpose. One's purpose beyond a prize, or carrots and sticks.
2) Closed-system function (toward self-integration): 1: diverse parts, 2: emergent whole, 3: networked relationships, 4: self-integration.
3) Tribal leadership (elusive stage 5): nudge each participant from "life sucks" to "my life sucks" to "I'm great" to "we're great" and then to "life's great."

Rob Laubacher

Nov 25, 2012


Hi Dmytro,

Thanks for your interesting comment. We are currently working with a volunteer designer on a visual representation of the taxonomy now being used to define sub-problems in the Climate CoLab. It would be interesting to get the community involved in developing this representation.

Have you (or other members) had experience with group creation of visual representations of complex problems, or with the Interlock Research approach you cite? If so, it would be very helpful to hear about these experiences.

Thanks,
Rob Laubacher
For the Climate CoLab team

Sam Notsureyouneedthis

Apr 20, 2015


Hi,

The Interlock approach does look promising. If you want collaboration, then windows and text boxes are probably not the best tools. You want to see the structure (at least initially), smell where you are needed, head there, look at and through different lenses for different levels and aspects of the problems, and contribute where you feel you can have an impact rather than diluting the conversation. So a nodal representation makes much more sense, in my opinion.

Also, I have to say I am a little shocked to find so few match-making tools available here. My impression has always been that the individual is not really instrumental in all this. It is combinations of individuals plus tools (thought models, traits, skills, and physical tools) that give you the edge, e.g. Gates + Allen, Ibuka + Morita, Siemens + Halske.

The other thing is that an exclusively text-based system is severely limited for collaboration. That is the one thing we've been doing for a very long time now, and the kinds of solutions you are likely to find in a text-based system are, by and large, the ones we already know. There are plenty of things we cannot (easily) write about that may be important for reaching different solutions. Granted, it is not easy to set this up, but that is kind of the point here, no?

Imagine you made a 3D wordle-style representation of the issues using images or icon sets. You can communicate the information much faster than with text, and you can juggle it better in your head (more items get factored in). You press a key and it flips to color-coded text. You press "+" and a further layer of detail near your cursor and its connected nodes is shown. You press a different key and can manipulate the wordle's shape. The software then suggests a couple of keyword "lenses" that match the new structure locally, etc. You never, ever get presented with an empty box: a minimum of two most-likely matches, one wildcard, and "reshuffle".
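The nodal map plus match-making idea above can be sketched in a few lines. Everything in this snippet is a hypothetical illustration, not anything Climate CoLab implements: `ProblemNode`, `suggest_matches`, and the skill sets are made-up names for the concept of tagging problem nodes with needed skills and always suggesting at least two likely collaborators plus one wildcard, so a contributor never sees an empty box.

```python
# Hypothetical sketch: problem nodes tagged with needed skills, plus a
# naive matcher. Names here are illustrative, not a real Climate CoLab API.

class ProblemNode:
    def __init__(self, title, needed_skills):
        self.title = title
        self.needed_skills = set(needed_skills)

def suggest_matches(node, participants):
    """Return the two participants whose skills overlap the node most,
    plus one wildcard (the weakest match, to surface outlier views)."""
    scored = sorted(
        participants.items(),
        key=lambda kv: len(node.needed_skills & kv[1]),
        reverse=True,
    )
    best = [name for name, _ in scored[:2]]
    wild = [name for name, _ in scored[-1:] if name not in best]
    return best + wild

node = ProblemNode("carbon pricing", {"economics", "policy", "modeling"})
people = {
    "Ada": {"economics", "modeling"},
    "Grace": {"policy"},
    "Linus": {"visualization"},
}
print(suggest_matches(node, people))  # ['Ada', 'Grace', 'Linus']
```

The wildcard slot is the design point: even a zero-overlap participant gets surfaced, matching the thread's argument that outlier concepts may hold the key.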