Year 1 Recap: Part 1

Our first project year concluded at the beginning of September, and before we start describing our current activities, it seems appropriate to recap what we accomplished in Year 1.  Here I’ve excerpted some of the content from the first year report we submitted at the end of August.

Our goal in Year 1 was to develop and pilot test three analog games, each intended to teach one key computer science concept. We proposed to design each game in three conditions: the game alone, the game presented in a fictional setting, and the game integrated with an interactive story.

Foundational Work

In Fall 2014, the team’s efforts initially focused on creating a common understanding of the project’s goals and more specific objectives, as well as sharing knowledge relevant to different facets of the project. The project team at this stage included the co-PIs, Gee and Stewart-Gardiner; consultants Carmichael and Hopping-Egan; a PhD student and an undergraduate research assistant at ASU; and an undergraduate at Kean University. As described in a previous blog post, one activity during this period was identifying and analyzing STEM-focused digital and analog games using an analytic framework developed by the team, which recorded, for example, whether a game had a recognizable plot, the nature of its protagonist (if any), and how well its learning objectives aligned with its game mechanics. A second background activity was conducting focus groups with middle-school-age girls: two groups in Arizona and two in New Jersey, involving a total of 15 girls. Participants discussed their game play preferences, played digital and analog STEM games, and shared their reactions.

Game Development

By mid-Fall, the team had also begun work on the design of the games themselves, which included reviewing publications on CS principles and concepts, scanning relevant curricula, deciding on a format for the game prototypes and related documents, and drafting initial design documents. By Winter 2015, the team had created complete drafts of game design documents on the topics of algorithms, data representation, and data organization. Each design document outlined a game in three versions (game alone, game with story context, and game with full story). The design documents were then expanded to include the actual game mechanics, rules, context and story, and facilitator directions, so that the games could be played. Prototypes of these games, with and without context or story, were initially reviewed and tested by team members, colleagues, game design experts, and university students. All games were modified, and the core mechanics of two games (data representation and data organization) were significantly revised. In late April, the full story version of the algorithm game was tested with 16 girls at the Phoenix Public Library, and the results were used to make further revisions.

Game Testing

We identified two sites for Summer 2015 game testing: the Phoenix Public Library and a three-week summer school camp in New Jersey. Our initial plan had been to pilot test our projected Year 2 research design with 10 participants at each site. However, because we were still developing and testing the games, we modified our summer plans. The Phoenix Public Library hosted ongoing prototype testing with small groups of girls: weekly sessions in the library’s Makerspace over eight weeks in June and July. In all, 12 girls in Arizona and 12 girls and 11 boys in New Jersey participated in these sessions. The New Jersey summer school camp was used to test the implementation of all three games in two versions (game alone and game with story context) with two different groups of participants, one of 9 and one of 14. Each group met for two hours, once a week, for three weeks. Since the camp served boys as well as girls, we included both in our game testing. Within the groups, we formed mixed-gender teams as well as all-female and all-male teams, which let us look for potential differences in the participation levels of girls and boys. Faithful attendance in a school setting with undergraduate facilitators enabled us to closely observe participants’ learning and engagement. These pilots showed that one game in particular needed further changes to improve concept learning, and each game underwent some modification in preparation for the larger implementation in Fall 2015.

Assessment

An additional project activity in Summer 2015 was the design and development of assessment strategies to accompany each game. There are currently no widely recognized assessment tools for computational thinking or computer science concepts at the secondary level. While developing a comprehensive assessment tool is beyond the scope of this project, we need a way to assess what students have learned by playing our games. Prototype assessment tools for each game were designed to assess both declarative and procedural knowledge. Declarative knowledge is assessed through questions that require recognition of key terms such as encoding and algorithm. Procedural knowledge is assessed through a series of performance tasks of increasing complexity, in particular, tasks progressively less similar to the context in which the procedure was learned (i.e., the game).

In the following post, I will summarize some of the design issues that are central to our work.
