A classroom of middle schoolers playing SimCityEDU doesn’t look like a classroom of students being tested on their science knowledge. Students aren’t sitting with their heads down, filling in pale blue scantron bubbles with number two pencils. Instead, they’re engaged, figuring out problems and gently reminding each other to take turns with the mouse along the way.
Created by GlassLab to align with the Next Generation Science Standards and Common Core State Standards, SimCityEDU has the unique capability of providing real-time data that lets teachers see students’ progress as they learn how a city’s many moving parts interact. It also logs students’ work into massive data sets, allowing researchers, developers, and educators to examine exactly how players are learning. The game integrates learning and assessment, and uses the resulting data to change what assessment looks like altogether.
Pittsburgh institutions are also finding ways to use big data to transform assessment and create new ways for teachers to visualize progress. The phrase “big data” is a bit of an umbrella term for huge data sets collected regularly. But in education, one of the ways big data is coming into play is by making assessments more nuanced, finding patterns and adding layers to grades that used to be flat marks on top of the page.
“We believe we can take the data we get from students’ interactions in the game environment and use it to understand what they know and can do more broadly,” writes Kristen DiCerbo, a learning games scientist at GlassLab. She uses the example of tracking whether large numbers of students add a greener power solution before removing a polluting one so their city doesn’t succumb to blackouts. “That is, do they understand that the power plant has both the effect of powering the city and polluting the air? Or do they just focus on one?”
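The kind of check DiCerbo describes can be run over a log of in-game events. This is a hypothetical sketch only: the event names and log format below are invented for illustration, and GlassLab's actual telemetry schema is not described in this article.

```python
# Hypothetical sketch: event names and log format are invented for
# illustration; they are not GlassLab's actual telemetry schema.

def added_green_before_removing_coal(events):
    """Return True if the player placed a green power source before
    bulldozing the polluting plant -- i.e., the player accounted for
    the coal plant both powering and polluting the city."""
    green_placed_at = None
    for i, event in enumerate(events):
        if event == "place_wind_turbine":
            green_placed_at = i
        if event == "bulldoze_coal_plant":
            return green_placed_at is not None and green_placed_at < i
    return False

# One simulated player session: turbine first, then the coal plant goes.
session = ["zone_residential", "place_wind_turbine", "bulldoze_coal_plant"]
print(added_green_before_removing_coal(session))  # True
```

Run over thousands of sessions, a flag like this becomes exactly the kind of aggregate signal DiCerbo mentions: what fraction of players planned ahead versus triggered a blackout.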
Teachers can also use the dashboard feature, a real-time visual map that shows where students are getting stuck. (There’s a demo of how it works at minute 20 of this webinar from the Institute of Play.)
“One of the things that we all get frustrated with is that kids get assessed in the spring and they get that data and feedback back in the fall. How is that helpful?” says Connie Yowell, director of education for the MacArthur Foundation, in a video about learning and assessment using SimCityEDU.
Not very helpful, according to Pearson’s Research & Innovation Network. Looking at how technology can enhance a classroom, the network finds that the rapid feedback technology allows is one of its biggest advantages. As its researchers write:
“Without feedback, misconceptions build. Students unknowingly make the same mistake again and again and can quickly fall behind. With rapid feedback, however, students and teachers can adapt, modify, and innovate within the learning process.” Rapid feedback also helps solidify the neural pathways that form memories.
Another new tool that gives teachers real-time feedback is LightSide, which evaluates students’ writing via “machine learning.” One of the first things LightSide’s founders state on their site is that they’re not “fans of automated essay grading.” (Want to know more about the automated grading controversy? Read about it here and here.) LightSide says its tool isn’t for spitting out a final grade. It doesn’t track word count or even attempt to detect grammar mistakes.
Instead, LightSide learns from human graders what patterns form effective writing in specific prompts. Unlike other tools that wait until the end to measure achievement, LightSide gives predictions as students are working, helping them continually revise and evaluate their progress.
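The core idea, learning word patterns from human-scored essays and then scoring a draft in progress, can be sketched in miniature. To be clear, this is not LightSide's actual model; the nearest-profile approach, the training essays, and the scores below are all invented for illustration.

```python
from collections import Counter

# Minimal sketch of learning from human graders -- NOT LightSide's
# actual model. Training essays and score levels are invented.

def features(text):
    """Bag-of-words feature counts for a piece of writing."""
    return Counter(text.lower().split())

def train(graded_essays):
    """Build one average word profile per human-assigned score level."""
    profiles = {}
    for score, essays in graded_essays.items():
        total = Counter()
        for essay in essays:
            total.update(features(essay))
        profiles[score] = {w: c / len(essays) for w, c in total.items()}
    return profiles

def predict(profiles, draft):
    """Return the score whose word profile best overlaps the draft,
    so a student can get a prediction mid-revision, not just at the end."""
    counts = features(draft)
    def overlap(profile):
        return sum(min(counts[w], profile[w]) for w in counts if w in profile)
    return max(profiles, key=lambda s: overlap(profiles[s]))

graded = {
    4: ["the evidence clearly supports the claim because the data show"],
    1: ["i think it is good because it is good"],
}
model = train(graded)
print(predict(model, "the data show evidence for the claim"))  # 4
```

Because `predict` runs on any partial draft, a tool built this way can re-score as the student types, which is the "continually revise and evaluate" loop described above.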
As with many other edtech innovations, Carnegie Mellon University is pushing the boundaries of using data to inform learning and teaching with its new Simon Initiative. The initiative aims to create an enormous, high-quality data set that researchers can use to study how students learn. According to Inside Higher Ed, the data collected so far amount to about 500,000 hours of student work and some 200 million clicks through problems and puzzles.
The Simon Initiative addresses a problem common to much education research: it’s hard (and expensive) to assemble the right data to study a question. Researchers either cobble together data from a few sources or launch their own studies to build a data set. All of that takes time and money, and frequently researchers aren’t even aware of one another’s efforts or data collections. The Simon Initiative aims to solve those problems by pulling numerous data sets together in one place so researchers and educators can work more collaboratively.
As the initiative’s website puts it:
Drawing on the expertise and resources of university, industry, and government members, a data bank consortia will collect and store thousands of high-quality data sets, accumulate the best analytic methods available, and create a large research community enabled to improve education through empirical research.
The data, we should note, are de-identified, with no identifying information attached to the numbers, so students’ anonymity and privacy are protected.
In the end, “big data” is about detecting patterns. For example, one LearnLab study found that students who used handwriting finished a set of algebra problems in half the time of those typing in answers. The students had the same rate of mistakes, though, suggesting that typing algebra solutions into a computer only slows kids down. Studies like this are being collected in an open wiki packed with information about how kids learn.
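The arithmetic behind a finding like that is simple once the data exist. A toy illustration, with timing numbers invented for the example rather than taken from the actual LearnLab study:

```python
import statistics

# Toy illustration of the kind of comparison such a study makes.
# These timing numbers are invented, not from the LearnLab data.
handwriting_minutes = [11, 9, 10, 12, 8]
typing_minutes = [21, 19, 22, 20, 18]

hw_mean = statistics.mean(handwriting_minutes)
ty_mean = statistics.mean(typing_minutes)
print(f"handwriting: {hw_mean} min, typing: {ty_mean} min")
print(f"typing took {ty_mean / hw_mean:.1f}x as long")  # 2.0x
```

The hard part, as the article notes, is not the comparison itself but assembling enough clean, shared data to make patterns like this visible at all.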
“We look for patterns in the data that suggest places where there are big bumps in the road toward learning,” Kenneth Koedinger, director of LearnLab, told the Pittsburgh Post-Gazette.
These patterns form day in and day out, and they can be tracked and analyzed along the way. Efforts like LightSide and SimCityEDU are capitalizing on this new way of assessing learning. And who knows? It might not be long before students never hear that dreaded word “test” again.