An inclusive Hanzi learning experience developed with AI image recognition.
Learning a language involves multiple cognitive abilities. There are four major learning styles: visual, auditory, kinaesthetic, and tactile. Learners' preferred learning styles differ significantly depending on their backgrounds.
Research shows that Chinese learners prefer tactile and kinaesthetic styles more than learners from other cultures do. But why?
Hanzi originated from oracle bone script: pictographs and ideographs that represent tangible objects. Chinese calligraphy is also ingrained in Hanzi culture, not only for functional writing but as an art form in its own right.
We started by choosing two easy-to-understand Hanzi components. The first one is '木' (meaning 'wood'), where the character's shape graphically represents a tree. The second is '口' (meaning 'mouth'), resembling an open mouth.
The meanings of '口' (mouth) and '木' (wood) extend metaphorically to the human body and nature, respectively. Their combinations give rise to a multitude of unique and meaningful Hanzi characters.
Both elements can serve as base characters that, when repeated, form new characters with related meanings. This tessellation aspect is particularly useful for graphical learning. For example, '木' doubles into '林' (woods) and triples into '森' (forest), while '口' triples into '品' (goods).
Traditional Hanzi calligraphy employs a four-quadrant guide grid ('TianZiGe') to position characters of varying structures.
To make the learning process more intuitive and hands-on, we decided to use this grid system to help learners understand Hanzi structures.
We've created two paths.
Path 1: When a player assembles an existing Hanzi on the grid, the interface reveals its meaning.
Path 2: If a player forms a new, non-existent Hanzi, they are encouraged to define this new character and can then view similar creations by other users.
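In code, this two-path branching can be sketched roughly as below. The dictionary entries, helper name, and return shape are illustrative assumptions, not the project's actual implementation:

```js
// Sketch of the two-path logic: a known Hanzi reveals its meaning,
// an unknown combination prompts the player to define a new character.
// Dictionary contents and names are illustrative assumptions.
const HANZI_DICTIONARY = {
  '林': 'woods (two 木 side by side)',
  '呆': 'dull (口 above 木)',
  '品': 'goods, grade (three 口)',
};

function handleAssembledCharacter(character) {
  if (character in HANZI_DICTIONARY) {
    // Path 1: an existing Hanzi, so reveal its meaning.
    return { path: 1, meaning: HANZI_DICTIONARY[character] };
  }
  // Path 2: a brand-new creation. Ask the player to define it,
  // then show similar entries from the New Hanzi Archive.
  return { path: 2, prompt: 'Define your new Hanzi!' };
}
```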
The experience has two parts:
Digital: a screen that displays educational information.
Physical: a grid, base character elements, and a webcam for interactive, hands-on activities.
We laser-cut the base components to suit various Hanzi structures while allowing ample creative space. Various materials and finishes were tested.
The tech side is web-based. I integrated AI image recognition into the JavaScript code and used a machine learning model to read the results of players' character placement.
The ML model works on raw image pixels, so I fed it 400+ image samples per Hanzi character to get accurate recognition results.
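As an illustration, the browser-side recognition loop could look like the sketch below, assuming the image model is loaded with ml5.js included via a script tag. The model URL and the showResult helper are placeholders, not the project's actual assets:

```js
// Sketch: classify the webcam feed against the trained image model (ml5.js assumed).
// The model URL and the showResult() helper are placeholders.
const MODEL_URL = 'https://example.com/hanzi-model/model.json'; // hypothetical

let classifier;
let video;

async function setup() {
  // Point the webcam at the physical TianZiGe grid.
  video = document.createElement('video');
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();

  // Load the trained image-classification model.
  classifier = await ml5.imageClassifier(MODEL_URL);
  classifyFrame();
}

function classifyFrame() {
  // Ask the model which character the current block placement looks like.
  classifier.classify(video, (error, results) => {
    if (error) return console.error(error);
    // results[0].label is the most likely Hanzi, e.g. '呆'
    showResult(results[0].label, results[0].confidence);
    requestAnimationFrame(classifyFrame); // keep classifying continuously
  });
}

setup();
```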
I also set up a backend server with Express to read the users' input and save it to a JSON file, building up the "New Hanzi Archive".
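Roughly, the archive endpoint could look like the sketch below; the route path, file name, and field names are illustrative assumptions rather than the exact project code:

```js
// Sketch of an Express server that appends new Hanzi definitions to a JSON file.
// Route path, file name, and field names are illustrative assumptions.
const express = require('express');
const fs = require('fs');

const app = express();
app.use(express.json()); // parse JSON request bodies

const ARCHIVE_FILE = 'new-hanzi-archive.json'; // hypothetical file name

function readArchive() {
  // Load the current archive, or start with an empty list.
  return fs.existsSync(ARCHIVE_FILE)
    ? JSON.parse(fs.readFileSync(ARCHIVE_FILE, 'utf8'))
    : [];
}

app.post('/archive', (req, res) => {
  // A player submits their newly invented Hanzi and its definition.
  const { character, definition } = req.body;
  const archive = readArchive();
  archive.push({ character, definition, createdAt: new Date().toISOString() });
  fs.writeFileSync(ARCHIVE_FILE, JSON.stringify(archive, null, 2));
  res.status(201).json({ saved: true });
});

app.get('/archive', (req, res) => {
  // Let the front end show similar creations by other players.
  res.json(readArchive());
});

app.listen(3000);
```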
Beginner participants still felt confused when prompted to assemble the Hanzi building blocks. They also felt the digital experience was a bit disconnected from the physical one. So I made the following iterations:
1. Introduce players to the two base components, '木' (wood) and '口' (mouth).
2. Guide players to create the Hanzi '呆' (dull) to learn about Hanzi structure and how to use the 'TianZiGe' grid.
3. Players are then invited to create Hanzi by themselves. If they create an existing Hanzi, we show them its meaning from the Hanzi dictionary.
3.1 If they create something that doesn't exist in the Hanzi dictionary, they get to define the new Hanzi themselves!
Portable Experience for a language museum
The digital component of our museum experience is now live on GitHub. Combined with the physical game kit, which includes a webcam, it can be played on a laptop or tablet at home, or even used as an educational activity in schools.
See it on GitHub