Monday, November 19, 2018
A couple of weeks back we discussed (Let’s Rise WITH the Robots) how tech might help people become better learners. Interestingly, last week I got to use some very cool assistive tech myself – some very mainstream, some cutting-edge, both very powerful – in two separate engagements with learning projects in Silicon Valley. Here’s what I learned:
Use what you have (and you have a lot!)
Tuesday, I sat in on an excellent presentation skills workshop by long-time leaders in that market, Mandel Communications. Mandel focuses on the spoken word and has helped literally tens of thousands of professionals get heard in the workplace, as well as to think and present in clear, impactful ways.
I really appreciated what the folks shared with me that day, as I definitely have much work to do on my communication clarity. My learning got a real head start ahead of the workshop by downloading their coaching tool, ORAI, an AI-powered app that lets you record yourself speaking and then shoots back instant feedback on tone of voice, energy, clarity, filler words, and pace.
And wow – the feedback was pretty tough! Seems I use filler words – ‘ums’ and ‘ers’ – like a linebacker uses Motrin. Ouch. But the feedback also came complete with practical advice, based on the proprietary Mandel method, giving me tips on how to use pauses instead of fillers and outlining a practice regimen for improvement.
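If you’re curious what that kind of feedback involves under the hood, here’s a minimal sketch of my own – emphatically not ORAI’s actual method – of the sort of analysis a speech-coaching app might run once a recording has been transcribed: counting filler words and estimating pace. The filler list and the `analyze_transcript` helper are illustrative assumptions.

```python
# Toy sketch (my assumption, not ORAI's real pipeline): score a transcribed
# speech sample for filler words and speaking pace.
import re

FILLERS = {"um", "uh", "er", "erm", "like"}  # assumed filler-word list

def analyze_transcript(transcript: str, duration_seconds: float) -> dict:
    """Return simple delivery metrics for a transcribed speech sample."""
    words = re.findall(r"[a-z']+", transcript.lower())
    filler_count = sum(1 for w in words if w in FILLERS)
    minutes = duration_seconds / 60.0
    return {
        "word_count": len(words),
        "filler_count": filler_count,
        "fillers_per_minute": round(filler_count / minutes, 1),
        "words_per_minute": round(len(words) / minutes, 1),
    }

if __name__ == "__main__":
    sample = "Um, so today I, er, want to talk about, um, presentation skills."
    print(analyze_transcript(sample, duration_seconds=6.0))
```

The real app layers tone, energy and clarity on top of this, but even the crude version makes the point: once speech is data, feedback can be instant.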
This prep also meant I showed up at the workshop knowing I had work to do, and primed to work hard on developing this key skill – my first tech-enabled learning assist of the week.
In the actual workshop, the smartphones came out again and everyone got to track their progress using video on their own devices. How helpful (and maybe a little cringe-inducing) to use each person’s own device to let them see themselves present – and how motivating to record and review the improvement over the course of the day. You would be amazed at the before-and-after videos: personal progress, captured.
Sure, video in presentation training is nothing new… but not long ago it required a trailer full of equipment and an exchange of disks, producing output that, let’s be honest, the learner was never likely to look at again. Putting all of that in my pocket made the training more engaging, more accessible, and also deeply personal and, dare I say, Instagram-able.
The result being… PAUSE… highly beneficial. (Hey, I’m still working on it, OK?)
Is that really my brain in there?
By contrast, I then got exposed to a tech assist straight outta the Rube Goldberg handbook – let’s just say it’s not quite ready for Best Buy… but it was still amazing and thought-provoking.
That’s because I rocked up to the Hasso Plattner Institute of Design at Stanford to drop in on a fun experiment hosted by Lisa Kay Solomon, using medical-grade brain-imaging technology to track attention, distraction, focus and cognitive overload by measuring electrical activity in the brain.
That’s all courtesy of the crazy-brilliant folks over at Uncommon Partners, who are pioneering the use of this approach to test and evaluate people’s reactions to VR technology. Carissa Carter, Director of Teaching and Learning at the d.school, and I were the ‘lab rats’: the device was strapped on and calibrated, then we went through a learning exercise involving memorization, collaboration, drawing (yes, I learned to draw a cow – thanks, VizLit), and envisioning. We were then able to see a representation of our attention and distraction levels: how, particularly during transitions from one exercise to the next, we edged towards overload, and how, when I took a surreptitious glance at my buzzing cellphone, the distraction line on my chart went through the roof.
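To make the idea a little more concrete, here’s a toy sketch – entirely my own assumption, not Uncommon Partners’ actual pipeline – of how a crude attention proxy can be pulled from a single channel of electrical brain activity: compare power in the beta band (often associated with focused engagement) against the theta band (often associated with mind-wandering). The band choices and the `attention_proxy` helper are illustrative only.

```python
# Toy sketch (my assumption, not the actual research setup): estimate a rough
# "engagement index" from one EEG channel as beta-band power over theta-band power.
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def band_power(channel: np.ndarray, fs: float, low: float, high: float) -> float:
    """Integrate the power spectral density of `channel` between low and high Hz."""
    freqs, psd = welch(channel, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= low) & (freqs <= high)
    return trapezoid(psd[mask], freqs[mask])

def attention_proxy(channel: np.ndarray, fs: float = 256.0) -> float:
    """Crude engagement index: beta (13-30 Hz) power over theta (4-8 Hz) power."""
    beta = band_power(channel, fs, 13.0, 30.0)
    theta = band_power(channel, fs, 4.0, 8.0)
    return beta / theta

if __name__ == "__main__":
    # Two seconds of synthetic noise standing in for a real EEG recording.
    rng = np.random.default_rng(0)
    fake_channel = rng.normal(size=2 * 256)
    print(f"engagement index: {attention_proxy(fake_channel):.2f}")
```

The real kit does far more than this, of course, but the principle is the same: turn brain activity into a signal you can chart, minute by minute, alongside the learning activity itself.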
This all led to an intriguing discussion on whether, how and when we might use such biometric feedback devices to help learners determine the optimum cerebral conditions for study and learning. The prospect is appealing, but also a potential privacy invasion on a heretofore unimagined level; it’s a door we may not want to open, though I suspect it will be. Trust me, this tech is not ready for primetime in your classroom… but it is now available at a small fraction of what it cost just five years ago (around $10K per device).
That means we’re really not far off the point where it’s feasible to run these exercises – hypothesize, test and learn, in real time – and think through the results. That’s a really powerful prospect.
So there you have two examples of learning technology in action – an app and a laboratory neuroscience experiment – that for me exemplify how we are getting tools that genuinely help focus our learning: real learning technology.
We’re going to get some of this wrong, and a lot of it may end up not being that useful – but we still don’t know what’s coming. So let’s experiment – and find ever better ways to put some real Science into Learning.