
ComputingEd

How do people understand computing, and how can we improve that understanding?

We might want naive and delusional PhD students

Fri, 06/29/2018 - 07:00

We’re in the midst of cleaning out 25 years of accumulated stuff in our house in order to sell this house, buy a new house in Ann Arbor, and move to the University of Michigan by September 1.

As I was cleaning, I found the below — my original statement of purpose that I submitted to the University of Michigan in 1988 to start my doctorate.

I shared it with some friends, ruefully.  It felt silly, as well as grammatically flawed. I really did think that I was going to get a faculty position in “Computer Science and Education” when I graduated in the early 1990’s.  I was naive, maybe even delusional. I had no idea what academic CS was like when I applied. The reality is far different than what I imagined.  At the Home4CS event just this last April, I mentioned that it would be great if we had CS Education faculty slots in Schools of Education today.  As Diane Levitt reported on Twitter, the audience roared with laughter.  How crazy was I to think that we’d have some in the 1990’s?

But now, some positions like that do exist.  There are faculty who have been hired at US higher-education institutions to focus on CS Ed.  My new job at the University of Michigan is a joint position between CS and their Engineering Education Research program.  It took 25 years, but yeah, I’m going to have the kind of job for which I earned my PhD.

Some friends encouraged me to share this statement. Maybe it’s a good thing to have naive new PhD students. Maybe that’s exactly what we want: PhD students who think long term, who have bought into a goal, a set of research questions, or a vision — and who are willing to work at it for decades. Eventually, if the student is really lucky and others are working on similar visions at the same time, the vision doesn’t seem quite so naive, quite so delusional.

I’ll be taking some time off from the blog while making the move to Michigan. I may post some guest contributions over the next few weeks, but for now, I’m putting the blog on hiatus.

Visiting NTNU in Trondheim Norway June 3-23

Thu, 06/28/2018 - 07:00

Barbara and I are just back from a three week trip to NTNU in Trondheim, Norway. Katie Cunningham came with us (here’s a blog post about some of her work). Three weeks is enough time to come up with a dozen ideas for blog posts, but I don’t have the cycles for that. So let me just give you the high-level view, with pictures and links to learn more.

We went at the beginning of June because Barb and I (and the University of Michigan) are part of the IPIT network (International Partnerships for Excellent Education and Research in Information Technology) that had its kick-off meeting June 3-5. The partnership is about software engineering and computing education research, with a focus on student and faculty exchange and meetings at each other’s institutions: NTNU, U. Michigan, Tsinghua University, and Nanjing University. I learned a lot about software engineering that I didn’t know before, especially about DevOps.

If you ever get the chance to go to a meeting organized by Letizia Jaccheri of NTNU, GO! She was the organizer for IPIT, co-chair of IDC 2018, and our overall host for our three weeks there. She has a wonderful sense for blending productivity with fun. During the IDC 2018 poster session, she brought in high school students dressed as storybook characters, just to wander around and “bring in a bit of whimsy.” For a bigger example, she wanted IPIT to connect with the NTNU campus at Ålesund, which just happens to be near the Geiranger fjord, one of the most beautiful in Norway. So, she flew the whole meeting to Ålesund from Trondheim! We took a large, cruise-ship-like boat with meeting rooms down the fjord. We got in some 5-6 hours of meetings, while also seeing amazing waterfalls and other views, and then visited the Ålesund campus the next day before flying home. We got work done and WOW!

For the next week and a half, we got to know the computing education research folks at NTNU. We were joined at the end of the first week by Elisa Rubegni from the University of Lincoln, and Roberto Martinez-Maldonado came by a couple days later. Barb, Elisa, and I held a workshop on the first Monday after IPIT. A couple days later, we had a half-day meeting with Michalis Giannakos’s group and Roberto, then Elisa led us all in a half-day design exercise (pictured below — Elisa, Sofia, Javi, and Katie). In between, we had individual meetings. I think I met with every one of the PhD students there working in computing education research. (And, in our non-meeting time, Barb and I were writing NSF proposals!)

Michalis’s group is doing some fascinating work. Let me tell you about some of the projects that most intrigued me.

  • Sofia (with Kshitij and Ilias) is the lead on a project where they track what kids using Scratch are looking at, both on and off screen. It’s part of this cool project where kids program these beautiful artist-created robots with Scratch. It’s a pretty crazy-looking experimental setup, with fiducial markers on notebooks and robots and screens.
  • Kshitij is trying to measure EEG and gaze in order to determine cognitive load in a user interface. Almost all cognitive load measures are based on self-report (including ours). They’re trying to measure cognitive load physiologically, and correlate it with self-report.
  • Katerina and Kshitij are using eye-tracking to measure how undergrads use tools like Eclipse. What I found most interesting was what they did not observe. I noticed in their data that they had no data on using the debugger. They explained that of 40 students, only five even looked at the debugger. Nobody used data or control flow visualizations at all. I’m fascinated by this — what does it take to get students to actually look at the debuggers and visualizers that were designed to help them learn?
  • Roberto is doing this amazing work with learning analytics in physical spaces, where nurses are working on robot patients. Totally serious — they can gather all kinds of data about where people are standing, how they interact, and when they interact. For tasks like nursing, this kind of data is super important for understanding what students are learning.

Then came FabLearn with an amazing keynote by Leah Buechley on art, craft, and computation. I have a long list of things to look up after her talk, including Desmos, computer-controlled cutting machines (which I had never heard of before), which are way cheaper than 3-D printers but still allow you to do computational craft, and http://blog.recursiveprocess.com/, which is all about learning coding and mathematics. She made an argument that I find fascinating — that art is what helps diverse students reflect their identity and culture in their school, and that’s why students who get art classes (controlling for SES) are more likely to succeed in school and go on to post-secondary schooling. Can computing make it easier to bring art back into school? Can computing then play a role in engaging children with school again?

The next reason we were at NTNU was to attend the EXCITED Centre advisory board meeting. Barb and I were there for the launch of EXCITED in January 2017. It’s a very ambitious project, following students from making informed decisions to go into CS/IT, through developing identities in CS, learning through construction, and increasing diversity in CS, to moving into careers. We got to hang out with Arnold Pears, Mats Daniels, and Aletta Nylén of UpCERG (Uppsala Computing Education Research Group), the world’s largest CER group.

Finally, for the last four days, we attended the Interaction, Design and Children Conference, IDC 2018. I wrote my Blog@CACM post for this month about my experiences there. I saw a lot there that’s relevant to people who read this blog. My favorite paper there tested the theory of concreteness fading on elementary school students learning computing concepts. Here’s a picture of a slide (not in the paper) that summarizes the groups in the experiment.

I’ll end with my favorite moment in IDC 2018, not in the Blog@CACM post. We met Letizia’s post-doc, Javier “Javi” Gomez at the end of our first week in Trondheim. Summer weather in Trondheim is pretty darn close to winter in Atlanta. One day, we woke up to 44F and rain. But we lucked out — the weekends were beautiful. On our first Saturday, Letizia invited us all to a festival near her home, and we met Javi and Elisa. That evening (but still bright sunlight), Javi, Elisa, Barb, and I took a wonderful kayaking trip down the Nidelva river. So it was a special treat to be at IDC 2018 to see Javi get TWO awards for his contributions, one for his demo and an honorable mention for his note. The note was co-authored by Letizia, and was her first paper award (as she talks about in the lovely linked blog post). It was wonderful to be able to celebrate the success of our new friends.

On the way back, Barb and I stopped in London to spend a couple days with Alan Kay and his wife, Bonnie MacBird. If I could come up with a dozen blog post ideas from 3 weeks, it’s probably like two dozen per day with Alan and Bonnie, and we had two days with them. Visiting a science museum with an exhibit on early computers (including an Alto!) is absolutely amazing when you’re with Alan. But those blog posts will have to wait until after my blog hiatus.

We can build new programming languages that people will teach, learn, and use: Scratch 3.0 in August

Mon, 06/25/2018 - 07:00

When I come out with blog posts saying that we need new programming languages (like this one), I regularly get a bunch of skepticism.  People will only use industry-approved languages, says one argument.  We need to teach the languages that exist, says another.

Then I just say, “Scratch.” It’s real programming, it’s popular, and it’s taught around the world. We ought to study how Scratch succeeded. One key insight: Don’t beat your head against the traditional CS1 teachers. There are a lot more people to teach, and not everyone has to become a software developer.

A new version of Scratch is coming this August!

Source: 3 Things To Know About Scratch 3.0 – The Scratch Team Blog – Medium

It Matters a Lot Who Teaches Introductory Courses if We Want Students to Continue

Fri, 06/22/2018 - 07:00

Thanks to Gary Stager who sent this link to me. The results mesh with Pat Alexander’s Model of Domain Learning. A true novice to a field is not going to pursue studies because of interest in the field — a novice doesn’t know the field. The novice is going to pursue studies because of social pressures, e.g., it’s a requirement for a degree or a job, it’s expected by family or community, or the teacher is motivating.  As the novice becomes an intermediate, interest in the domain can drive further study.  These studies suggest that persistence is more likely to happen if the teacher is a committed, full-time teacher.

The first professor whom students encounter in a discipline, evidence suggests, plays a big role in whether they continue in it.

On many campuses, teaching introductory courses typically falls to less-experienced instructors. Sometimes the task is assigned to instructors whose very connection to the college is tenuous. A growing body of evidence suggests that this tension could have negative consequences for students.

Two papers presented at the American Educational Research Association’s annual meeting in New York on Sunday support this idea.

The first finds that community-college students who take a remedial or introductory course with an adjunct instructor are less likely to take the next course in the sequence.

The second finds negative associations between the proportion of a four-year college’s faculty members who are part-time or off the tenure track and outcomes for STEM majors.

Source: It Matters a Lot Who Teaches Introductory Courses. Here’s Why.

The Story of MACOS: How getting curriculum development wrong cost the nation, and how we should do it better

Mon, 06/18/2018 - 07:00

Man: A Course of Study (MACOS) is one of the most ambitious US curriculum efforts I’ve ever heard about. The goal was to teach anthropology to 10-year-olds. The effort was led by world-renowned educational psychologist Jerome Bruner, and included many developers, anthropologists, and educational psychologists (including Howard Gardner). It won awards from the American Educational Research Association and from other education professional organizations for its innovation and connection to research. At its height, MACOS was in thousands of schools, including whole school districts.

Today, MACOS isn’t taught anywhere. Funding for MACOS was debated in Congress in 1975, and the controversy led eventually to the de-funding of science education nationally.

Peter Dow’s 1991 book Schoolhouse Politics: Lessons from the Sputnik Era is a terrific book which should be required reading for everyone involved in computing education in K-12. Dow was the project manager for MACOS, and he’s candid in describing what they got wrong. It’s worthwhile understanding what happened so that we might avoid it in computing education. I just finished reading it, and here are some of the parts that I found particularly insightful.

First, Dow doesn’t dismiss the critics of MACOS. Rather, he recognizes that the tension is between learning objectives. What do we want for our children? What kind of society do we want to build?

I quickly learned that decisions about educational reform are driven far more by political considerations, such as the prevailing public mood, than they are by a systematic effort to improve instruction. Just as Soviet science supremacy had spawned a decade of curriculum reform led by some of our most creative research scientists during the late 1950s and 1960s, so now a new wave of political conservatism and religious fundamentalism in the early 1970s began to call into question the intrusion of university academics into the schools…Exposure to this debate caused me to recast the account to give more attention to educational politics. No discussion of school reform, it seems, can be separated from our vision of the society that the schools serve.

MACOS was based in the best of educational psychology at the time. Students engaged in inquiry with first-hand accounts, e.g., videos of Eskimos. The big mistake the developers made was that they gave almost no thought to how it was going to get disseminated. Dow points out that MACOS was academic researchers intruding into K-12, without really understanding K-12. They didn’t plan for teacher professional development, and worse, didn’t build any mechanism for teachers to tell them how the materials should be changed to work in real classrooms. They were openly dismissive of the publishers who might get the materials into the world.

On teachers: There was ambivalence about teachers at ESI. On the one hand the Social Studies Program viewed its work as a panacea for teachers, a liberation from the drudgery of textbook materials and didactic lessons. On the other, professional educators were seen as dull-witted people who conversed in an incomprehensible “middle language” and were responsible for the uninspired state of American education.

On publishers: These two experienced and widely respected publishing executives listened politely while Bruner described our lofty education aspirations with characteristic eloquence, but the discussion soon turned to practical matters such as the procedures of state adoption committees, “tumbling test” requirements, per-pupil expenditures, readability formulas, and other restrictions that govern the basal textbook market. Spaulding and Kaplan tried valiantly to instruct us about the realities of the educational publishing world, but we dismissed their remarks as the musings of men who had been corrupted by commercialism. Did they not understand that our mission was to change education, not submit to the strictures that had made much of instruction so meaningless? Could not men so powerful in the publishing world commit some of their resources to support curriculum innovation? Had they no appreciation of the intellectual poverty of most social studies classrooms? I remember leaving that room depressed by the monumental conservatism of our visitors and more determined than ever to prove that there were ways to reach the schools with good materials. Our arrogance and naivete were not so easily cured.

By 1971, Dow realizes that the controversies around MACOS could easily have been avoided. They had made choices in their materials that highlighted the challenges of Eskimo life graphically, but the gory details weren’t really necessary to the learning objectives. They simply hadn’t thought enough about their users, which included the teachers, administrators, parents, and state education departments.

My favorite scene in the book is with Margaret Mead who tries to help Dow defend MACOS in Congress, but she’s frustrated by their arrogance and naivete.

Mead’s exasperation grew. “What do you tell the children that for?…I have been teaching anthropology for forty years,” she remarked, “and I have never had a controversy like this over what I have written.”

But Mead’s anger quickly returned. “No, no, you can’t tell the senators that! Don’t preach to them! You and I may believe that sort of thing, but that’s not what you say to these men. The trouble with you Cambridge intellectuals is that you have no political sense!”

Dow describes over two chapters the controversies around MACOS and the aftermath for science education funding at NSF. But he also points out the problems with MACOS as a curriculum. Some of these are likely problems we’re facing in CS for All efforts.

For example, he talks about why MACOS was removed from Oregon schools, using the work of Lynda Falkenstein. (Read the below with an awareness of the Google-Gallup and EdWeek polls showing that administrators and principals are not supportive of CS in schools.)

She concluded that innovations that lacked the commitment of administrators able to provide long-term support and continuing teacher training beyond the initial implementation phase were bound to founder regardless of their quality. Even more than controversy, she found, the greatest barrier to successful innovation was the lack of continuity of support from the internal structure of the school system itself.

I highly recommend Schoolhouse Politics. It has me thinking about what it really takes to get any education reform to work and to scale. The book is light on evaluation evidence that MACOS worked. For example, I’m concerned that MACOS was so demanding that it may have been too much for underprepared students or teachers. I am totally convinced that it was innovative and brilliant. One of the best curriculum design efforts I’ve ever read about, in terms of building on theory and innovative design. I am also totally convinced that it wasn’t ready to scale — and the cost of that mistake was enormous. We need to avoid making those mistakes again.

Are you talking to me? Interaction between teachers and researchers around evidence, truth, theory, and decision-making

Fri, 06/15/2018 - 01:00

In this blog, I’m talking about computing education research, but I’m not always sure and certainly not always clear about who I’m talking to. That’s a problem, but it’s not just my problem. It’s a general problem of research, and a particular problem of education research. What should we say when we’re talking to researchers, and what should we say when we’re talking to teachers, and where do we need to insert caveats or explain assumptions that may not be obvious to each audience?

From what I know of philosophy of science, I’m a post-positivist. I believe that there is an objective reality, and the best tools that we humans have to understand it are empirical evidence and the scientific method. Observations and experiments have errors and flaws, and our perspectives are biased. All theory should be questioned and may be revised. But that’s not how everyone sees the world, and what I might say in my blog may be perceived as a statement of truth, when the strongest statement I might make is a statement of evidence-supported theory.

It’s hard to bridge the gap between researchers and teachers. Lauren Margulieux shared on Twitter a recent Educational Researcher article that addresses the issue. It’s not about getting teachers access to journal articles, because those articles aren’t written to speak to or address teachers’ concerns. There have to be efforts from both directions, to help teachers to grok researchers and researchers to speak to teachers.

I have three examples to concretize the problem.

Recursion and Iteration

I wrote a blog post earlier this month where I stated that iteration should be taught before recursion if one is trying to teach both. For me, this is a well-supported statement of theory. I have written about the work by Anderson and Wiedenbeck supporting this argument. I have also written about the terrific work by Pirolli exploring different ways to teach recursion, which fed into the work by Anderson.

In the discussion on the earlier post, Shriram correctly pointed out that there are more modern ways to teach recursion, which might make it better to teach before iteration. Other respondents to that post pointed out newer forms of iteration that are much simpler. Anderson and Wiedenbeck’s work was in the 1980’s. That sounds great — I would hope that we can do better than what we did 30 years ago. I do not know of studies that show that the new ways work better or differently than the ways of the 1980’s, and I would love to see them.

By default, I do not assume that more modern ways are necessarily better. Lots of scientists do explore new directions that turn out to be cul-de-sacs in light of later evidence (e.g., there was a lot of research in learning styles before the weight of evidence suggested that they didn’t exist). I certainly hope and believe that we are coming up with better ways to teach and better theories to explain what’s going on. I have every reason to expect that the modern ways of teaching recursion are better, and that the FOR EACH loop in Python and Java works differently than the iteration forms that Anderson and Wiedenbeck studied.

The problem for me is how to talk about it.  I wrote that earlier blog post thinking about teachers.  If I’m talking to teachers, should I put in all these caveats and talk about the possibilities that haven’t yet been tested with evidence? Teachers aren’t researchers. In order to do their jobs, they don’t need to know the research methods and the probabilistic state of the evidence base. They want to know the best practices as supported by the evidence and theory. The best evidence-based recommendation I know is to teach iteration before recursion.

But had I thought about the fact that other researchers would be reading the blog, I would have inserted some caveats.  I mean to always be implicitly saying to the researchers, “I’m open to being proven wrong about this,” but maybe I need to be more explicit about making statements about falsifiability. Certainly, my statement would have been a bit less forceful about iteration before recursion if I’d thought about a broader audience.

Making Predictions before Live Coding

I’m not consistent about how much evidence I require before I make a recommendation. For a while now, I have been using predictions before live coding demonstrations in my classes. It’s based on some strong evidence from Eric Mazur that I wrote about in 2011 (see blog post here). I recommend the practice often in my keynotes (see the video of me talking about predictions at EPFL from March 2018).

I really don’t have strong evidence that this practice works in CS classes. It should be a pretty simple experiment to test the theory that making predictions before seeing program execution demonstrations helps with learning (a rough sketch of the analysis follows the list):

  • Have a set of programs that you want students to learn from.
  • The control group sees the program, then sees the execution.
  • The experimental group sees the program, writes down a prediction about what the execution will be, then sees the execution.
  • Afterwards, ask both groups about the programs and their execution.
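
Here is a minimal sketch in Python of how the comparison might look, with entirely made-up post-test scores standing in for real data (the numbers, group sizes, and variable names are mine, not from any actual study). A two-sample t-test plus an effect size like Cohen’s d is the standard comparison for a two-group design like this:

    # Hypothetical analysis sketch for the prediction experiment above.
    # All scores are invented for illustration; nothing here is real data.
    from statistics import mean, stdev
    from scipy import stats

    control = [62, 70, 55, 68, 74, 60, 66, 71, 58, 65]     # saw execution only
    prediction = [75, 81, 68, 72, 79, 70, 77, 83, 66, 74]  # predicted first

    # Two-sample t-test: is the difference in means bigger than chance?
    t_stat, p_value = stats.ttest_ind(prediction, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # Cohen's d with a pooled standard deviation: is the difference
    # educationally meaningful, not just statistically detectable?
    n1, n2 = len(prediction), len(control)
    pooled = (((n1 - 1) * stdev(prediction) ** 2 +
               (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
    print(f"d = {(mean(prediction) - mean(control)) / pooled:.2f}")

The real work, of course, is in writing the programs and post-test questions so that they measure learning rather than recall.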

I don’t know that anybody has done this experiment. We know that predictions work well in physics education, but we know that lots of things from physics education do not work in CS education. (See Briana Morrison’s dissertation.)

Teachers have to do lots of things for which we have no evidence. We don’t have enough research in CS Ed to guide all of our teaching practice. Robert Glaser once defined education as “Psychology Engineering,” and like all engineers, teachers have to do things for which we don’t have enough science. We make our best guess and take action.

So, I’m recommending a practice for which I don’t have evidence in CS education. Sometimes when I give the talk on prediction, I point out that we don’t have evidence from CS. But not always. I probably should. Maybe it’s enough that we have good evidence from physics, and I don’t have to get into the subtle differences between PER and CER for teachers. Researchers should know that this is yet another example of a great question to be addressed. But there are too few Computing Education Researchers, and none that I know are bored and looking for new experiments to run.

Code.org and UTeach CSP

Another example of the complexity of talking to teachers about research is reflected in a series of blog posts (and other social media) that came out at the end of last year about the AP CS Principles results.

  • UTeach wrote a blog post in September about the excellent results that their students had on the AP CSP exam (see post here). They pointed out that their pass rate (83%) was much higher than the national average of 74%, and that advantage in pass rates was still there when the data were disaggregated by gender or ethnicity.
  • There followed a lot of discussion (in blog posts, on Facebook, and via email) about what those results said about the UTeach curriculum. Should schools adopt the UTeach CSP curriculum based on these results?
  • Hadi Partovi of Code.org responded with a blog post in October (see post here). He argued that exam scores were not a good basis for making curriculum decisions. Code.org’s pass rates were lower than UTeach’s (see their blog post on their scores), and that could likely be explained by Code.org’s focus on under-represented and low-SES student groups who might not perform as well on the AP CSP for a variety of reasons.
  • Michael Marder of UTeach responded with two blog posts. One conducted an analysis suggesting that UTeach’s teacher professional development, support, and curriculum explained their difference from the national average (see post here), i.e., it wasn’t due to which students were served by UTeach. A second post tried to respond to Hadi directly to show that UTeach did particularly well with underrepresented groups (see post here).

I don’t see that anybody’s wrong here. We should be concerned that teachers and other education decision-makers may misinterpret the research results to say more than they do.

  • The first result from UTeach says “UTeach’s CSP is very good.” More colloquially, UTeach doesn’t suck. There is snake oil out there. There are teaching methods that don’t actually work well for anyone (e.g., we could talk some more about learning styles) or only work for the most privileged students (e.g., lectures without active learning supports). How do you show that your curriculum (and PD and support) is providing value, across students in different demographic groups? Comparing to the national average (and disaggregated averages) is a reasonable way to do it.
  • There are no results saying that UTeach is better than Code.org for anyone, or vice-versa. I know of no studies comparing any of the CSP curricula. I know of no data that would allow us to make these comparisons. They’re hard to do in a way that’s convincing. You’d want to have a bunch of CSP students and randomly assign them to either UTeach or Code.org, trying to make sure that all relevant variables (like the percentage of women and underrepresented groups) are the same in each. There are likely not enough students taking CSP yet to be able to do these studies.
  • Code.org likely did well for their underrepresented students, and so did UTeach. It’s impossible to tell which did better. Marder is arguing that UTeach did well with underrepresented groups, and UTeach’s success was due to their interventions, not due to the students who took the test.  I believe that UTeach did well with underrepresented groups. Marder is using statistics on the existing data collected about their participants to make the argument about the intervention. He didn’t run any experiments. I don’t doubt his stats, but I’m not compelled either. In general, though, I’m not worried about that level of detail in the argument.

All of that said, teachers, principals, and school administrators have to make decisions. They’re engineers in the field. They don’t have enough science. They may use data like pass rates to make choices about which curricula to use. From my perspective, without a horse in the race or a dog in the fight, it’s not something I’m worried about. I’m much more concerned about the decision whether to offer CSP at all. I want schools to offer CS, and I want them to offer high-quality CS. Both UTeach and Code.org offer high-quality CS, so that choice isn’t really a problem. I worry about schools that choose to offer no CSP or no CS at all.

Researchers and teachers are solving different problems. There should be better communication. Researchers have to make explicit the things that teachers might be confused about, but they might not realize what the teachers are confused about. In computing education research and other interdisciplinary fields, researchers may have to explain to each other what assumptions they’re making, because their assumptions are different in different fields. Teachers may use research to make decisions because they have to make decisions. It’s better for them to use evidence than not to use evidence, but there’s a danger in using evidence to make invalid arguments — to say that the evidence implies more than it does.

I don’t have a solution to offer here. I can point out the problem and use my blog to explore the boundary.

Workshops for New Computing Faculty in Summer 2018: Both Research and Teaching Tracks

Tue, 06/12/2018 - 06:00

This is our fourth year, and our last NSF-funded year, for the New Computing Faculty Workshops which will be held August 5-10, 2018 in San Diego. The goal of the workshops is to help new computing faculty to be better and more efficient teachers. By learning a little about teaching, we will help new faculty (a) make their teaching more efficient and effective and (b) make their teaching more enjoyable. We want students to learn more and teachers to have fun teaching them. The workshops were described in Communications of the ACM in the May 2017 issue (see article here) which I talked about in this blog post. The workshop will be run by Beth Simon (UCSD), Cynthia Bailey Lee (Stanford), Leo Porter (UCSD), and Mark Guzdial (Georgia Tech).

This year, for the first time, we will offer two separate workshop tracks:

  • August 5-7 will be offered to tenure-track faculty starting at research-intensive institutions.
  • August 8-10 will be offered to faculty starting a teaching-track job at any school, or a tenure-track faculty line at a primarily undergraduate-serving institution where evaluation is heavily based on teaching.

The new teaching-oriented faculty track is being added this year due to enthusiasm and feedback we heard from past participants and would-be participants. When I announced the workshops last year (see post here), we heard complaints (a little on email, and a lot on Twitter) asking why we were only including research-oriented faculty and institutions. We did have teaching-track faculty come to our last three years of new faculty workshops that were research-faculty focused, and unfortunately those participants were not satisfied. They didn’t get what they wanted or needed as new faculty. Yes, the sessions on peer instruction and how to build a syllabus were useful for everyone. But the teaching-track faculty also wanted to know how to set up their teaching portfolio, how to do research with undergraduate students, and how to get good student evaluations, and didn’t really care about how to minimize time spent preparing for teaching and how to build up a research program with graduate students while still enjoying teaching undergraduate students.

So, this year we made a special extension request to NSF, and we are very pleased to announce that the request was granted and we are able to offer two different workshops. The content will have substantial overlap, but with a different focus and framing in each.

To apply for registration, please apply to the appropriate workshop based on the type of your position: research-focused position http://bit.ly/ncsfw2018-research or teaching-focused position http://bit.ly/ncsfw2018-teaching. Admission will be based on capacity, grant limitations, fit to the workshop goals, and application order, with a maximum of 40 participants. Apply on or before June 21 to ensure eligibility for workshop hotel accommodation. (We will notify respondents by June 30.)

Many thanks to Cynthia Lee, who helped a lot with this post.

Reflections of a CS Professor and an End-User Programmer

Mon, 06/11/2018 - 02:00

In my last blog post, I talked about the Parsons problems generator that I used to put scrambled code problems on my quiz, study guide, and final exam. I’ve been reflecting on the experience and what it suggests to me about end-user programming.

I’m a computing professor, and while I enjoy programming, I mostly code to build exercises and examples for my students. I almost never code research prototypes anymore. I only occasionally code scripts that help me with something, like cleaning data, analyzing data, or in this case, generating problems for my students. In this case, I’m a casual end-user programmer — I’m a non-professional programmer who is making code to help him with some aspect of his job. This is in contrast:

  • To Philip Guo’s work on conversational programmers, who are people who learn programming in order to talk to programmers (see his post describing his papers on conversational programmers). I know how to talk to programmers, and I have been a professional programmer. Now, I have a different job, and sometimes programming is worthwhile in that job.
  • To computational scientists and engineers, whom Software Carpentry addresses. Computational scientists and engineers do not write code occasionally to solve a problem. They write code as part of their research. I might write a script to handle an odd job, but most of my research is not conducted with code.

Why did I spend the time writing a script to generate the problems in LaTeX? I was teaching a large class, over 200 students. Mistakes on quizzes and exams at that scale are expensive in terms of emails, complaints, and regrading. Scrambled code problems are tricky. It’s easy to randomly scramble code. It’s harder to keep track of the right ordering. I needed to be able to do this many times.

Was it worthwhile? I think it was. I had a couple Parsons problems on the quiz, maybe five on the study guide, and maybe three on the final exam. (Different numbers at different stages of development.) Each one got generated at least twice as I refined, improved, or fixed the problem. (One discovery: Don’t include comments. They can legally go anywhere, so it only makes grading harder.) The original code only took me about an hour to get working. The script got refined many times as I used it, but the initial investment was well worth it for making sure that the problem was right (e.g., I didn’t miss any lines, and indentation was preserved for Python code) and the solution was correct.

Would it be worthwhile for anyone else to write this script facing the same problems? That’s a lot harder question.

I realized that I brought a lot of knowledge to bear on this problem.

  • I have been a professional programmer.
  • I do not use LiveCode often, but I have used HyperTalk a lot, and the environment is forgiving with lots of help for casual programmers like me. LiveCode doesn’t offer much for data abstraction — basically, everything is a string.  I have experience using the tool’s facility with items, words, lines, and fields to structure data.
  • I know LaTeX and have used the exam class before. I know Python and the fact that I needed to preserve indentation.

Then I realized that it takes almost as much knowledge to use this generator. The few people who might want to use the Parsons problem generator that I posted would have to know about Parsons problems, want to use them, be using LaTeX for exams, and know how to use the output of the generator.

But I bet that all (or the majority?) of end-user programming experiences are like this. End-users are professionals in some domain. They know a lot of stuff. They’ll bring a lot of knowledge to their programming activity. The programs will require a lot of knowledge to write, to understand, and to use.

One of the potential implications is that this program (and maybe most end-user programs?) are probably not useful to many others.  Much of what we teach in CS1 for CS majors, or maybe even in Software Carpentry, is not useful to the occasional, casual end-user programmer.  Most of what we teach is for larger-scale programming.  Do we need to teach end-user programmers about software engineering practices that make code more readable by others?  Do we need to teach end-user programmers about tools for working in teams on software if they are not going to be working in teams to develop their small bits of code? Those are honest questions.  Shriram Krishnamurthi would remind me that end-user programmers, even more than any other class of programmers, are more likely to make errors and less likely to be able to debug, so teaching them practices and tools to catch and fix errors is particularly important for them.  That’s a strong argument, but I also know that, as an end-user programmer myself, I’m not willing to spend a lot of time that doesn’t directly contribute towards my end goal.  Balancing the real needs of end-user programmers with their occasional, casual use of programming is an interesting challenge.

The bigger question that I’m wondering about is whether someone else, facing a similar problem, could learn to code with a small enough time investment to make it worthwhile. I did a lot of programming in HyperTalk when I was a graduate student. I have that investment to build on. How much of an investment would someone else have to make to be able to write this kind of script as easily?

Why LiveCode? Why not Python? Or Smalltalk? I was originally going to write this in Python. Why not? I was teaching Python, and the problems would all be in Python. It’d be good exercise for me.

I realized that I didn’t want to deal with files or a command line. I wanted a graphical user interface. I wanted to paste some code in (not put it in a file), and get some text that I could copy (not find it in one or more files). I didn’t want to have to remember what function(s) to call. I wanted a big button. I simply don’t have the time to deal with the cognitive load of file names and function names. Copy-paste the sorted code, press the button, then copy-paste the scrambled code and copy-paste the solution. I could do that. Maybe I could build a GUI in Python, but every time I have used a GUI tool in Python, it was way more work than LiveCode.
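
For comparison, here is roughly what that paste-and-big-button workflow looks like in Python’s built-in tkinter. This is a minimal sketch of my own, not the original LiveCode tool, and even this small program shows the extra ceremony compared to dragging widgets into place:

    # Minimal tkinter sketch of the paste / press-a-button / copy workflow.
    # Illustrative only; the real tool was built in LiveCode.
    import random
    import tkinter as tk

    def scramble():
        lines = input_box.get("1.0", "end-1c").splitlines()
        order = list(range(len(lines)))
        random.shuffle(order)
        output_box.delete("1.0", "end")
        # Scrambled lines first, then the original line number of each.
        output_box.insert("end", "\n".join(lines[i] for i in order))
        output_box.insert("end", "\n\nKEY (original line numbers): "
                          + ", ".join(str(i + 1) for i in order))

    root = tk.Tk()
    root.title("Scramble sketch")
    input_box = tk.Text(root, height=10, width=60)
    input_box.pack()
    tk.Button(root, text="Scramble", command=scramble).pack()
    output_box = tk.Text(root, height=12, width=60)
    output_box.pack()
    root.mainloop()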

I also know Smalltalk better than most. Here’s a bit of an embarrassing confession: I’ve never really learned to build GUIs in Smalltalk. I’ve built a couple of toy examples in Morphic for class. But a real user interface with text areas that really work? That’s still hard for me. I didn’t want to deal with learning something new. LiveCode is just so easy — select the tool, drag the UI object into place.

LiveCode was the obvious answer for me, but that’s because of who I am and the background that I already have. What could we teach future professionals/end-user programmers that (a) they would find worthwhile learning (not too hard, not too time-consuming) and (b) they could use casually when they needed it, like my Parsons problem generator? That is an interesting computing education research question.

How does a student determine “worthwhile” when deciding what programming to learn for future end-user programming?  Let’s say that we decided to teach all STEM graduate students some programming so that they could use it in their future professional practice as end-user programmers.  What would you teach them?  How would they judge something “worthwhile” to learn for later?

We know some answers to this question.  We know that students judge the authenticity of the language based on what they see themselves doing in the future and what the current practice is in that field (see Betsy DiSalvo’s findings on Glitch and our results on Media Computation).

But what if that’s not a good programming language? What if there’s a better one?  What if the common practice in a field is ill-informed? I’m going to bet that most people, faced with the general problem I was facing (wanting a GUI to do a text-processing task), would use JavaScript.  LiveCode is way better than JavaScript for an occasional, casual GUI task — easier to learn, more stable, more coherent implementation, and better programming support for casual users.  Yet, I predict most people would choose JavaScript because of the Principle of Social Proof.

I’ve been reading Robert Cialdini’s books on social psychology and influence, and he explains that social proof is how people make decisions when they’re uncertain (like how to choose a programming language when they don’t know much about programming) and there are others to copy.

First, we seem to assume that if a lot of people are doing the same thing, they must know something we don’t. Especially when we are uncertain, we are willing to place an enormous amount of trust in the collective knowledge of the crowd. Second, quite frequently the crowd is mistaken because they are not acting on the basis of any superior information but are reacting, themselves, to the principle of social proof.

Robert B. Cialdini, Influence (Collins Business Essentials), HarperCollins, Kindle Edition, locations 2570-2573.

How many people know both JavaScript and LiveCode well?  And don’t consider computer scientists. You can’t convince someone by telling them that computer scientists say “X is better than Y.”  People follow social proof of people that they judge to be similar to them. It’s got to be someone in their field, someone who works like them.

It would be hard to teach the graduate students something other than what’s in common practice in their fields, even if it’s more inefficient to learn and harder to use than another choice.

A Generator for Parsons problems on LaTeX exams and quizzes

Fri, 06/08/2018 - 02:00

I just finished teaching my Introduction to Media Computation a few weeks ago to over 200 students. After Barb finished her dissertation on Parsons problems this semester, I decided that I should include Parsons problems on my last quiz, on the final exam study guide, and on the final exam. It’s a good fit for the problem. We know that Parsons problems are a more sensitive measure of learning than code writing problems, they’re just as effective as code writing or code fixing problems for learning (so good for a study guide), and they take less time than code writing or fixing.

Barb’s work used an interactive tool for providing adaptive Parsons problems. I needed to use paper for the quiz and final exam. There have been several paper-based implementations of Parsons problems, and Barb guided me in developing mine.

But I realized that there’s a challenge to doing a bunch of Parsons problems like this — what happens when you find that you got something wrong? The quiz, study guide, and final exam were all going to iterate several times as we developed them and tested them with the teaching assistants. How do I make sure that I kept the scrambled code and the right answer lined up?

I decided to build a gadget in LiveCode to do it.

I paste the correctly ordered code into the field on the left. When I press “Scramble,” a random ordering of the code appears (in a Verbatim LaTeX environment) along with the right answers, to be used in the LaTeX exam class. If you want to list a number of points to be associated with each correct line, you can put a number into the field above the solution field. If empty, no points will be explicitly allocated in the exam document.

I’d then paste both of those fields into my LaTeX source document. (I usually also pasted in the original source code in the correct order, so that I could fix the code and re-run the scramble when I inevitably found that I did something wrong.)
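
The core transformation is small. Here is a rough Python equivalent of what the gadget does (my reconstruction for illustration; the actual LiveCode source is linked below): shuffle the lines while remembering where each one came from, then emit the Verbatim block and the answer key together so they can never drift apart:

    # Rough Python equivalent of the gadget's core step (a reconstruction,
    # not the posted LiveCode source).
    import random

    def scramble_for_latex(code):
        lines = code.rstrip("\n").splitlines()   # indentation is preserved
        order = list(range(len(lines)))
        random.shuffle(order)

        scrambled = "\n".join(f"{pos + 1}. {lines[i]}"
                              for pos, i in enumerate(order))

        # For each line of the correct solution, which scrambled number
        # the student should write down.
        where = {orig: pos + 1 for pos, orig in enumerate(order)}
        key = ", ".join(str(where[i]) for i in range(len(lines)))

        return ("\\begin{Verbatim}\n" + scrambled + "\n\\end{Verbatim}\n"
                + "% answer key: " + key)

    print(scramble_for_latex("def halve(freq):\n    return freq / 2"))

Because the scrambled listing and the answer key come from the same shuffle in one step, fixing the source and re-running keeps them consistent.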

The wording of the problem was significant. Barb coached me on the best practice. You allow students to write just the line number, but encourage them to write the whole line because the latter is going to be less cognitive load for them.

Unscramble the code below that halves the frequency of the input sound.

Put the code in the right order on the lines below. You may write the line numbers of the scrambled code in the right order, or you can write the lines themselves (or both). (If you include both, we will grade the code itself if there’s a mismatch.)

The problem as the student sees it looks like this:

The exam class can also automatically generate a version of the exam with answers, for use in grading. I didn’t solve any of the really hard problems in my script, like how to deal with lines that could be put in any order. When I found that problem, I just edited the answer fields to list the acceptable options.

I’m making the LiveCode source available here: http://bit.ly/scrambled-latex-src

LiveCode generates executables very easily. I have generated Windows, MacOS, and Linux executables and put them in a (20 Mb, all three versions) zip here: http://bit.ly/scrambled-latex

I used this generator probably 10-20 times in the last few weeks of the semester. I have been reflecting on this experience as an example of end-user programming. I’ll talk about that in the next blog post.

Teach two languages if you have to: Balancing ease of learning and learning objectives

Mon, 06/04/2018 - 07:00

My most recent CACM Blog post addresses a common question in computer science education: Should we teach two programming languages in a course to encourage abstraction, or just one? Does it hurt students to teach two? Does it help them to learn a second language earlier? My answer (in really short form) is “Just teach one, because it takes longer to learn one than you expect. If you teach two or more, students are going to struggle to develop deep understanding.”

But if your learning objective is for students to learn two (or more languages), teach two or more languages. You’re going to have to pay the piper sometime. Delaying is better, because it’s easier and more effective to transfer deep knowledge than to try to transfer surface-level representations.

The issue is like the question of recursion-first or iterative-control-structures-first. (See this earlier blog post.) If your students don’t have to learn iterative control structures, then teach recursion-only. Recursion is easier and more flexible. But if you have to teach both, teach iteration first. Yes, iteration is hard, and learning iteration-first makes recursion harder to learn later, but if you have to do it, iteration-first is the better order.

There’s a lot we know about making computing easier to learn. But sometimes, we just can’t use it, because there are external forces that require certain learning objectives.

Integrating CS into other fields, so that other fields don’t feel threatened: Interview with Jane Prey

Fri, 06/01/2018 - 07:00

I really enjoyed the interview in the last SIGCSE Bulletin with Jane Prey.  Her reason for doing more to integrate CS into other disciplines, at the undergraduate level, is fascinating — one I hadn’t heard before.

Other fields are nervous because they think we’re taking so many students from them, and universities are nervous because they’re afraid of losing us to industry. I would hate to lose any other faculty position to add a CS professor. I really believe it’s important for computing professionals to be well-rounded, to be able to appreciate what they learned in history, biology, and anthropology classes. We need to do a better job of integrating more of a student’s educational experiences. For example, how do we do more work together with the education schools? We just aren’t there. We have to work cross-disciplines to develop a path forward, even though it’s really hard.

A Place to Get Feedback and Develop New Ideas: WIPW at ICER 2018

Wed, 05/30/2018 - 07:00

Everybody’s got an idea that they’re sure is great, or could be great with just a bit of development. Similarly, everyone has hit a tricky crossroads in their research and could use a little nudge to get unstuck. The ICER Work in Progress workshop is the place to get feedback and help on that idea, and give feedback and help to others on their cool ideas. I did it a few years ago at the Glasgow ICER and had a wonderful day. You learn a lot, and you get a bunch of new insights about your own idea. As Workshop Leader (and the inventor of the ICER Work in Progress workshop series) Colleen Lewis put it, “You get the chance to borrow the brains of some really awesome people to work on your problem.”

Colleen is the Senior Chair again this year, and I’m the Junior Chair-in-Training.

The workshop is only one day and super-fun. If you’re attending ICER this year, please apply for the Work in Progress workshop! https://icer.hosting.acm.org/icer-2018/work-in-progress/ The application is due June 8 (it’s just a quick Google form).

Let Colleen or me know if you have questions!

Some principals are getting interested in CS, but think pressure for CS is mostly coming from Tech companies

Mon, 05/28/2018 - 07:00

How do high school principals in small, medium and large districts view the Computer Science for All movement?


High school leaders in smaller districts are most enthusiastic about the trend, a new survey by the Education Week Research Center found. Overall, 30% of all principals say CS is not “on their radar,” and 32% say CS is an “occasional supplement or enrichment opportunity.”  Two findings from the survey’s graphs struck me: the majority of principals aren’t particularly excited by CS, and most principals think that it’s the Tech firms that are pushing CS onto schools, not parents.

Source: Principals Warm Up to Computer Science, Despite Obstacles

Andrew McGettrick receives 2018 ACM Presidential Award for contributions to computing education

Fri, 05/25/2018 - 07:00

Don Gotterbarn, Andrew McGettrick and Fabrizio Gagliardi will receive 2018 ACM Presidential Awards.

Andrew McGettrick, honored for his unwavering commitment to computer science education—particularly in terms of its quality, breadth, and access—for generations of students worldwide. McGettrick served as chair of ACM’s Education Board and Education Council for over 15 years, leaving an indelible imprint as a passionate advocate for equipping computer science students with the knowledge, skills, and tools to succeed in the field. During his tenure, he steered the development of key curricula in computer science and software engineering. In recent years, he has played an instrumental role in championing European educational efforts and professional societies, through his work with ACM’s Europe Council and Informatics Europe. McGettrick was one of the leading forces behind the Informatics for All initiative, an acclaimed report that explores strategies for Informatics education in Europe at all levels.

I am so thrilled to see Andrew receive this award. It’s so well-deserved.  The paragraph above gives a good summary, but doesn’t capture how Andrew has had such an impact in computing education.  He’s a diplomat, tireless and stalwart.  He’s such a nice guy. He draws you in, talks to you, listens to you, recognizes your concerns, and helps reach a position that meets everyone’s needs.  I worked with him for several years on some of his initiatives, and was always impressed with his thoughtfulness, kindness, and work ethic. Few people I know have had such broad impact on computing education, across multiple continents.

Congratulations to Andrew!

Source: Three leaders will receive 2018 ACM Presidential Awards for contributions to computer ethics, education and public policy

Computer science education is far bigger than maker education: A post in lieu of a talk #InfyXRoads

Mon, 05/21/2018 - 07:00

I was scheduled to speak this Thursday in the final plenary panel of the Infosys Foundations USA CrossRoads 2018 conference (see program here). My father passed away on May 10, and we just had the funeral Friday May 18, so I apologized and cancelled the trip. I had already thought about what I wanted to say, so here’s a blog post in lieu of a panel presentation.

The session is “Why Teach CS? Why Teach Making?” with Yasmin Kafai, Quincy Brown, and Colleen Lewis. The session was inspired in part by my blog post listing the reasons for teaching programming, and was framed in our preliminary discussions as a debate. Is there a difference between CS education and Maker education? Yasmin was tasked with making the argument that they are pretty much the same. I disagree with that position. Colleen was moderating, and Quincy was still keeping her cards close to her chest — I don’t know what position she’s going to take Thursday.

If our goal is to teach the basics of programming, sure, maker education (where we teach students to make physical devices with embedded computation, such as e-textiles, robotics, or Lego Mindstorms devices) and the kind of computing education that I see reflected in the K-12 CS Framework is pretty much the same. There’s some CS education in there. Students learn the basics of sequential execution, conditionals, and looping. But that’s not the same as computer science education.

If our goal is to change students’ attitudes towards technology, then sure, maker education may be even more effective than computing education for getting students to see the technology in their world. By making their own technology, students may increase their self-efficacy and come to feel that they can and should have control over the technology in their lives. But again, that’s not the same as teaching students computer science.

The big ideas of computer science are much bigger than maker education. Here are three examples.

The questions that Alan Turing was trying to answer when he invented the Turing Machine were “What is computable? What are the limits of mathematics? What is not computable? Is even human intelligence computable?” These are as meta as you can get. This is the heart of computer science, as the science of abstraction. These aren’t ideas students currently explore in maker education. Maybe they could, but certainly don’t require a maker context.

One of the most powerful ideas associated with Turing Machines is that any computer can simulate any other computer, including being many other computers with many processes. That’s the big idea that Alan Perlis was talking about in 1961 when he talked about computer science as the study of process. That’s one of the big ideas behind object-oriented programming as Alan Kay defined it.  We don’t explore simulation in maker education, and it’s hard to imagine how we might.


Ada Lovelace was the world’s first computer programmer. More than that, she was the first to realize that computers were about programming anything. Quoting from her Wikipedia page:

Ada saw something that Babbage in some sense failed to see. In Babbage’s world his engines were bound by number…What Lovelace saw—what Ada Byron saw—was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper.

Maker education isn’t about general computation. It’s about computing associated with sensors and actuators. Computer science education is about computing everything, from numbers to letters to musical notes. Having to connect the computation to a device made by the student limits the space of what you might compute. Computer science is about representation and abstractions on representations. Everything can be defined in terms of bits. That’s a big idea.  You can probably teach that concept in maker education, but it can be taught (and more easily) without tying it to maker education.
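Here is that big idea in a few lines of Python. The bit pattern is fixed; whether it means a quantity, a letter, or a musical note is entirely up to the program reading it (the MIDI reading is my choice of illustration):

```python
# The same bits, read three different ways. The interpretation
# lives in the program, not in the bits themselves.

bits = 0b1000101            # the bit pattern 1000101

print(bits)                 # as a quantity: 69
print(chr(bits))            # as a letter: 'E' (code point 69)
print(f"MIDI note {bits}")  # as a musical note: 69 is A4, 440 Hz
```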

Most of us know Grace Hopper’s name today, but probably more for her iconic status and as the namesake for the Grace Hopper Conference than for what she actually did. Admiral Grace Hopper led the effort to create compiled programming languages, including (eventually) COBOL. There are so many big ideas in here, but let’s just take two.

  • Automatic programming means that you have a program specified in one language (like COBOL or Java or Scratch), and you use that as input to a program that generates an equivalent program written in another language (the target used to be machine language, but JavaScript is probably more common today). A compiler is a program that inputs a program and generates another program. That is a powerful, meta idea that students do not typically see in maker education; a toy compiler is sketched after this list. Could we teach about compilers in maker education?  Maybe, but "making" is certainly not the easiest and most obvious way to talk about compilers — it's another way computing education is bigger than maker education.
  • COBOL was about making programming accessible by using words and concepts familiar to the end users. (It was also about designing a compiled language that would work on any underlying computer, which connects back to Turing’s machine.) Designing for others who are not you and have different expertise than you is one of the most fundamental ideas of human-computer interface design today. Do we get to that in maker education? That big idea occurs more often in non-maker contexts, e.g., making apps for others and using user-centered design to get there.
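Here is the toy compiler promised above: a Python program whose input is a program (an arithmetic expression) and whose output is another program, in JavaScript. Everything about this little "language" is invented for illustration, and it leans on Python's ast module to avoid writing a parser:

```python
# A toy compiler: input is a program in one language (arithmetic
# expressions), output is a program in another language (JavaScript).

import ast

def compile_to_js(source: str, name: str = "f") -> str:
    """Compile an arithmetic expression over x into a JS function."""
    tree = ast.parse(source, mode="eval")   # reuse Python's parser

    def emit(node):
        if isinstance(node, ast.Expression):
            return emit(node.body)
        if isinstance(node, ast.Constant):
            return repr(node.value)
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.BinOp):
            ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}
            return f"({emit(node.left)} {ops[type(node.op)]} {emit(node.right)})"
        raise ValueError(f"unsupported syntax: {node!r}")

    return f"function {name}(x) {{ return {emit(tree)}; }}"

print(compile_to_js("3 * x + 1"))
# -> function f(x) { return ((3 * x) + 1); }
```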

Bottomline: CS education is so much bigger than maker education. You can explore a lot of computer science using student-made devices as a context. Ben Shapiro has shown that he can have kids playing with powerful modern-day computing ideas from networking to machine learning, all using student-made devices. That’s serious CS education. But it’s not all of CS education, and you can do CS education apart from student-made devices. Maker and CS education are not one-to-one.

There is an equity component here. We often talk about Ada Lovelace and Grace Hopper when we talk about the women who were part of the creation of computer science. We do them a disservice if we only remember them as early members of a category “women in computing.” It’s important to recognize what they actually did, what they contributed to computer science — and we should teach that. What Lovelace and Hopper did mattered, and we demonstrate that it mattered by teaching it and explaining why it’s important.  Ideas like data representation and compilers are not today taught in maker education, are not easily taught in maker education, and can certainly be taught without maker education.

The big ideas that Turing, Lovelace, and Hopper created and explored are not new. This shouldn’t be the realm of advanced CS any more.  An important goal of computer science education should be to teach these foundational ideas of computer science.  I don’t think we know how to get there yet, but that should be our goal. We should be teaching the computer science developed by the people we hold up as heroes, leaders, and role models.

We can teach a lot with maker education, but let’s make sure that we don’t miss out on what CS education is about. Maker education is a great idea. It’s a terrific context for learning some of CS. If we only focus on the intersection of maker and CS education, we might miss the other, far bigger ideas that are in computer science.

Is there a “hype cycle” for educational programming languages?

Fri, 05/18/2018 - 07:00

As a longtime Smalltalk-er, I loved this piece: “The 50-year Gartner Hype Cycle for Smalltalk

Interesting how the hype cycle applies to Smalltalk:

  • Technology Trigger — the hype began with the famous 1981 BYTE cover and continued throughout the 1980s.
  • Peak of Inflated Expectations — in the 1990s, Smalltalk became the biggest OOP language after C++ and even IBM chose it as the centrepiece of their VisualAge enterprise initiative to replace COBOL.
  • Trough of Disillusionment — Java derailed Smalltalk by being: 1) free; and 2) Internet-ready. Free Squeak (1996) and Seaside web framework (2002) were not enough to save it.
  • Slope of Enlightenment — Pharo was released in 2008 and became the future of Smalltalk, thanks to its remarkable pace of evolution. We are still in this phase, which requires continuing and sustained advocacy.
  • Plateau of Productivity — we are waiting for this phase, perhaps in the next decade. I am sanguine.

Educational programming languages (or maybe just programming languages' use in education) don't seem to follow this curve at all. Does a programming language ever "come back" once it has left classrooms? Logo? Pascal? Even if there's a "Trough of Disillusionment" (e.g., when we realized just how hard C++ and Java are for novices), we still see long-term use. Even if we later realize how good something was (e.g., Logo for integration into curriculum), it doesn't come back.

I wonder what the similar curve looks like for programming languages in education.

Scale or Fail: Making national CS education work in Switzerland

Mon, 05/14/2018 - 07:00

Alex Repenning has the CACM Viewpoints Education column this month where he sets out a bold challenge — scale CS education to a national scale, or fail at making CS education work for all.

K–12 computer science Education (CSed) is an international challenge with different countries engaging in diverse strategies to reach systemic impact by broadening participation among students, teachers and the general population. For instance, the CS4All initiative in the U.S. and the Computing at School movement in the U.K. have scaled up CSed remarkably. While large successes with these kinds of initiatives have resulted in significant impact, it remains unclear how early impact becomes truly systemic. The main challenge preventing K–12 CSed to advance from teachers who are technology enthusiasts to pragmatists is perhaps best characterized by Crossing the Chasm, a notion anchored in the diffusion of innovation literature. This chasm appears to exist for CSed. It suggests it is difficult to move beyond early adopters of a new idea, such as K–12 CSed, to the early majority. Switzerland, a highly affluent, but in terms of K–12 CSed somewhat conservative country, is radically shifting its strategy to cross this chasm by introducing mandatory pre-service teacher computer science education starting at the elementary school level.

Three fundamental CSed stages are characterized by permutations of self-selected/all and students/teachers combinations. It took approximately 20 years to transition through these stages. Each stage is described here from a more general CSed perspective as well as my personal perspective.

Source: Scale or Fail

Feeling disadvantaged in CS courses at University of XXX

Fri, 05/11/2018 - 07:00

Even at Berkeley, home of The Beauty and Joy of Computing, the great course that emphasizes teaching CS to everyone, there are students who don't feel that they belong in CS. See the post quoted and linked below.

Of course, the story below is not about Berkeley.  This is about the slow pace of change, and how difficult it is to get whole CS departments to buy into the vision of “CS for All.”

CS 61A was a completely different story.

Last fall, I had the opportunity to work as a lab assistant for Data 8: “Foundations of Data Science,” and I couldn’t help but notice the difference in atmosphere between the students in Data 8 and my own experience in CS 61A.

Data 8 is one of the alternative courses offered for UC Berkeley students who are new programmers. Data 8 and CS 10: “The Beauty and Joy of Computing” are offered to students who want to test the waters of programming before jumping into 61A.

Data 8 uses Python, just like 61A. But the concepts are taught more slowly so new programmers can really understand how to use these concepts properly in their code.

Source: Column | Feeling disadvantaged in CS courses at UC Berkeley

Why are CS students so hard to nudge? A theory for why it’s so hard to promote a growth mindset in CS1

Mon, 05/07/2018 - 07:00

Pearson took a lot of heat recently for trying to improve students’ mindset in My Programming Lab.  I’m slightly worried about the ethics of their “embedded experiment.” I’m more worried that it didn’t work.

Titled “Embedding Research-Inspired Innovations in EdTech: An RCT of Social-Psychological Interventions, at Scale,” the study placed 9,000 students using MyLab Programming into three groups, each receiving different messages from the software as they attempted to solve questions. Some students received “growth-mindset messages,” while others received “anchoring of effect” messages. (A third control group received no messaging at all.) The intent was to see if such messages encouraged students to solve more problems. Neither the students nor the professors were ever informed of the experiment, raising concerns of consent.

The “growth mindset messages” emphasized that learning a skill is a lengthy process, cautioning students offering wrong answers not to expect immediate success. One example: “No one is born a great programmer. Success takes hours and hours of practice.” “Anchoring of effect” messages told students how much effort is required to solve problems, such as: “Some students tried this question 26 times! Don’t worry if it takes you a few tries to get it right.”

As Education Week reports, the interventions offered seemingly no benefit to the students. Students who received no special messages attempted to solve more problems (212) than students in either the growth-mindset (174) or anchoring groups (156). The researchers emphasized this could have been due any of a variety of factors, as the software is used differently in different schools.

Source: Pearson Embedded a ‘Social-Psychological’ Experiment in Students’ Educational Software [Updated]

Beth Simon and her colleagues tried a similar experiment, reported at ICER 2008.  They did get informed consent.  They tried a similar kind of “nudge” to get students to adopt a growth mindset.  It didn’t work for Beth et al., either.

I advised Kantwon Rogers' MS in HCI project, where he tried to nudge CS1 students (both on-line and off-line) to have a greater sense of "belongingness" in CS. Similar to these previous studies, he sent email prompts to students — some just encouraged study skills, and others promoted a sense that they belonged and could succeed in CS. In almost all of his conditions, belongingness dropped.

What’s going on here?  Why are CS students so impervious to these prompts that have been successful in other settings?

I have a theory.  There’s a notion in the behavioral sciences literature that you get more success changing behavior or promoting attitudes by reducing barriers than by prompting for desired behavior or attitudes.  The analogy is to a large boulder that you want to move: You can push it and push it, or you can just dig away the dirt from the bottom.  The latter is likely to get the boulder rolling without as much effort.

Here's my theory: Introductory CS classes have systemic issues that encourage a fixed mindset and discourage a sense of belonging. There are too many signals to students that they can't succeed, that they can't get better, and that they don't belong — perhaps especially in times of rising enrollment. Mere nudges are not going to move the boulder. We're going to have to remove the barriers to belonging, self-efficacy, and the sense that students can succeed at CS.


Ever so slowly, diversity in computing jobs is improving: It’ll be equitable in a century

Fri, 05/04/2018 - 07:00

A great but sobering blog post from Code.org. Yes, computing is becoming more diverse, but at a disappointingly slow rate. Is it possible to go faster? Or is this just the pace at which we can change a field?

According to the Bureau of Labor Statistics, yes, but very slowly. We’ve analyzed the Current Population Survey data from the past few years to see how many people are employed in computing occupations, and the percentage of women, Black/African American, and Hispanic/Latino employees.

What did we find? There are about 5 million people employed in computing occupations, 24% of whom are women, and 15% of whom are Black/African American or Hispanic/Latino.

Since 2014, the trends in representation, although small, have been moving in the right direction — all three groups showed a tiny increase in representation. However, changes would need to accelerate significantly to reach meaningful societal balance in our lifetimes. If the current pace of increases continue, it would take over a century* until we saw balanced representation in computing careers.

Source: Is diversity in computing jobs improving? – Code.org – Medium
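The "over a century" claim is straightforward linear extrapolation. Here is the back-of-the-envelope version; the 24% figure is from the post, but the per-year gain is my assumption for illustration, since the post doesn't quote an exact annual increase:

```python
# Back-of-the-envelope: years until gender parity under linear growth.
current_share = 24.0   # percent women in computing jobs (from the post)
target_share = 50.0    # balanced representation
gain_per_year = 0.25   # ASSUMED annual gain, for illustration only

years = (target_share - current_share) / gain_per_year
print(f"about {years:.0f} years to parity")   # about 104 years
```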
