As we think about the future of curriculum, the immediate future seems to be defined by the Common Core, although a recent blog post that made its way to my Twitter feed describes 17 of the 45 CCSS states that are questioning their participation (http://deutsch29.wordpress.com/2013/11/23/common-core-unrest-obvious-in-17-states/). Regardless of the standards documents we use to organize what we teach, it seems to me that educators must have the flexibility to include fascinating stories and issues as they arise.
Robot ethics is one example of a topic educators should be free to bring into the curriculum. The ethical questions raised by the increasing role of robots in our economic, political, and cultural life are diverse and complex. Isaac Asimov famously stated his Three Laws of Robotics decades ago:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws define how a robot must act (or be programmed to act), but there is a case to be made that rules governing how humans interact with robots are needed as well.
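Because Asimov's laws are explicitly ordered by priority, they can even be read as a tiny rule-based program. Here is a minimal sketch in Python, purely for illustration; the `Action` class, its attributes, and the `judge` function are hypothetical names I've invented. It encodes each law as a check applied in priority order, so a higher law always settles a question before a lower one is consulted.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical attributes of a proposed robot action, for illustration.
    harms_human: bool = False       # would the action injure a human?
    ordered_by_human: bool = False  # was the action ordered by a human?
    self_destructive: bool = False  # would the action destroy the robot?

# Laws checked in priority order; the first rule that applies decides,
# so a higher law always overrides a lower one.
LAWS = [
    ("First Law",  lambda a: a.harms_human,      "forbidden"),
    ("Second Law", lambda a: a.ordered_by_human, "required"),
    ("Third Law",  lambda a: a.self_destructive, "forbidden"),
]

def judge(action: Action) -> str:
    for name, applies, verdict in LAWS:
        if applies(action):
            return f"{verdict} by the {name}"
    return "permitted"

# An order to harm a human is settled by the First Law before the
# Second Law (obedience) is ever consulted.
print(judge(Action(harms_human=True, ordered_by_human=True)))
# -> forbidden by the First Law
```

Even this toy encoding surfaces a wrinkle worth arguing about: the First Law's "through inaction" clause compels a robot to act rather than simply forbidding an action, so it doesn't fit a simple veto list at all. That is exactly the kind of complication students could sink their teeth into.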
This recent blog post outlines some of the issues relevant to the emerging field of robot ethics. I challenge anyone to share this with a group of middle school students and see whether they are engaged by the topic.
I understand the role of standards and the logic that supports the arguments for using them, but my experience and my knowledge of learners tell me that a curriculum based on problems students perceive as relevant is an essential foundation for meaningful learning.