Designing for Interaction

 

Andrew D. Sackville

Edge Hill College of Higher Education

sackvila@edgehill.ac.uk

 

ABSTRACT

A common challenge facing educators involved in online learning is finding ways of sustaining student activity in asynchronous learning environments. This paper reflects on the design and evaluation of a number of online programmes which have been run at Edge Hill over the last three years. It relates the intentions of the programme designers, informed by a common model of pedagogy, to the actual patterns of interaction which took place on the programmes. Using evaluation and action research to inform and amend design has been a feature of the programmes; and the paper concludes that design and redesign are essential but often overlooked components in developing and delivering online learning opportunities.

Keywords

Design, pedagogy, interaction, evaluation

INTRODUCTION

This paper is based on my experience of designing and developing a number of supported online programmes at Edge Hill over the last three years. These have ranged from a six-week online staff development module in Designing and Supporting Online Learning, to an eleven-month Postgraduate Certificate in Teaching and Learning in Clinical Practice (PGC) for senior health practitioners who have a teaching role in their work. It is the evaluation of and research into this latter programme which has informed the present paper. A fuller account of the planning and development of this programme is available (Sackville 2000).

Like many "starting-up" online developers, I was aware in 1999 of an increasing number of narrative accounts of other academics developing and delivering online learning programmes. These accounts were informative and helpful – but often led to papers which contained "tips" for potential developers. Similarly, many of the texts available on online learning reflected this approach of giving helpful advice, but somehow did not quite answer my own individual concerns about certain aspects of online design. I was also aware of more structured research into online learning – but again it was often difficult to apply these research findings to my own specific situation in planning and designing a new online programme.

My own pedagogic perspective and practice have been heavily influenced by a social constructivist position, so, not surprisingly, in designing my first online programme in 1999 I turned to my previous and current experience of both distance learning (using paper-based materials) and face-to-face learning facilitation. Working together with colleagues from two other academic institutions, we spent time sharing and developing our own model of pedagogy which would influence the development and design of the proposed programme. I discuss this in more detail below.

As academics concerned with teaching and learning in clinical practice, we were committed to continually evaluating all aspects of the programme, and to carrying out action research into our own design and delivery of the programme. Initially we relied on traditional methods of evaluation – continuous feedback to tutors, end-of-module questionnaires, etc. This was supplemented by analysing the monitoring information which was available in the Virtual Learning Environment (VLE) we were using (WebCT). Finally, we have been undertaking a closer examination of the actual online interaction which took place in the delivery of the programme, using both manual coding and a software package (QSR NUD*IST) to analyse it. This research is still ongoing, so what is presented here is only a partial picture of the interaction.

DESIGN

Visscher-Voerman and her colleagues have identified four different perspectives on educational design and development, which they have labelled instrumental, communicative, pragmatic and artistic (Visscher-Voerman et al. 1999). The instrumental perspective focuses on design following guidelines which have been derived from the goals and outcomes of the programme to be developed. In contrast, the communicative perspective sees design as evolving from the interaction between the designers and the potential students of the finished programme. A pragmatic process would see designers creating the programme by quickly building, testing and revising several prototypes or earlier versions of the programme. Finally, an artistic perspective would characterise designers as creating the programme in response to the specific situation in which they work. Visscher-Voerman recognises that designers may draw on more than one perspective, and this accords with our experience: we were agreeing clear goals and outcomes for the programme; were discussing the programme as we designed it with health professionals in practice; were modifying the programme as we delivered it; and were designing a "new" programme on a "green-field" site.

Whilst the above paragraph characterises some of the mechanics of design, it is also necessary to recognise that designers often hold different philosophical approaches to curriculum design. Toohey has suggested that five different philosophical perspectives can be identified in course design in higher education. She labels these the traditional or discipline-based approach; the performance or systems-based approach; the cognitive approach; the experiential or personal relevance approach; and the socially critical approach (Toohey 1999). Toohey argues that these perspectives "surface in the language that is used to describe educational goals, in the choice of what is to be taught, in the design of teaching spaces, in the allocation of time within the course, in decisions about assessment" (Toohey 1999, p.44). In our case, the design team were influenced by the cognitive, experiential and socially critical approaches (see below).

However, a review of the design literature made it apparent that "learning cannot be designed; it can only be designed-for; that is – facilitated or frustrated" (Wenger 1998, p.229). The developer has to design a social infrastructure that fosters learning. As a recent OTIS report reminds developers, you can design in "pull" factors – extending invitations to participate; you can design in "push" factors – building in requirements to participate; and you can "avoid" certain features which dampen or prohibit participation (OTIS 2001). But Wenger identifies the tension which exists within design between participation and reification – between rigidity and adaptability. As he so succinctly puts it, "there is an inherent uncertainty between design and its realisation in practice, since practice is not the result of design but rather a response to it" (Wenger 1998, p.233).

PEDAGOGIC APPROACHES

The design team on the PGC course decided to work out its own pedagogic model which would guide the design of the programme. Since all four members of the team came from an educational background, they agreed a common understanding of social constructivism, which offered a metaphor of people in conversation constructing a shared version of the world. Jarvis has argued that "a central (constructivist) method is 'real talk' which includes discourse and exploration, talking and listening, questions, argument, speculation and sharing, but in which domination is replaced by reciprocity and co-operation" (Jarvis et al. 1998, p.73).

The nature of the potential participants on the programme and the subject matter (teaching and learning) meant that we were keen to build on the participants' experience of teaching: we wanted to assist them in reflecting on their existing practice, to share their experiences and concerns, to explore alternative forms of teaching and learning facilitation, and to link their practice into both theory and extant research. We were very conscious that most of the participants would have been taught (and probably taught others) in a traditional didactic fashion, and would probably expect a corpus of non-contentious material which they could adopt. Most participants would be comparative novices at engaging in online learning.

We were conscious of wanting to develop a learning community which brought together health professionals from a number of different professions which rarely engaged in joint training or the sharing of ideas (doctors, dentists, nurses, professions allied to medicine). We recognised that for this to happen we needed to develop a climate of trust within which learning could take place.

At this stage in the development of the programme we took a crucial decision to adopt a supported online approach to learning. For the eleven-month period of the programme, we decided to build in five face-to-face contact days; we allocated a personal tutor to each participant (who could be contacted via e-mail, telephone, visit, etc.); we adopted a "set text" around which the online material would be structured; and we agreed a set of regular tasks which we would ask the participants to undertake online. Our pedagogic model was now taking shape, and was expressed in diagrammatic form:

Figure 1. The Pedagogic Model

DIMENSIONS OF INTERACTION

In a seminal article, Moore identified three categories of interaction – interaction with content, interaction with instructors and interaction among students (Moore 1989). As we started to design the programme, we added another two categories of interaction, and recognised that there were variations in two of the categories that Moore had suggested. Our own conception of the dimensions of interaction is portrayed in Figure 2:

Figure 2. Dimensions of Interactivity

 

We have added the learner – technology dimension to convey the necessity of designing-in activities to assist the learner in utilising the medium of learning to their best advantage. This dimension has an important variant, namely tutor – technology. We have also added participants – practice community as a new dimension of activity, since the learning needs to be linked into wider professional practice – which not only supports the potential learning, but which may in turn be influenced by the learning which takes place on the programme.

The construction of "activities" or "tasks" that would promote the various dimensions of interaction became a crucial component of the design process. The use of activities in both face-to-face teaching and in paper-based distance education has a long tradition, and the design team drew on these traditions in constructing the activities for the programme. However, the use of a VLE provided a number of potential advantages for online activities over activities in either face-to-face or paper-based distance programmes. These included the opportunity for reflection in asynchronous discussions and the opportunity to archive and build up records of discussions – both missing from face-to-face discussions; and the opportunity for group interaction on activities and for tutor response at crucial points in the discussion – both missing in paper-based education.

Davis and Denning have described their design approach as one which "depends on online collaborative construction of knowledge, drawing on students' past experiences and new understanding gained on the course" (Davis & Denning 2001, p.65), and our activities fell into a similar pattern of two broad categories – one which asked students to reflect on their own previous or present experiences, and the other which asked them to reflect on theoretical perspectives and research findings.

INTERACTIVITY IN PRACTICE – GENERAL EVIDENCE

By far the most accessible data which we collected on participation in the online programmes relates to the volume of e-mail and bulletin board discussion. As others have commented, not only were we gratified by the total volume of participation, but we quickly noted temporal variations in the interactivity which was taking place on the programme. Not surprisingly, the early tasks in each of the three modules attracted more discussion than later tasks. This may partly be due to the pattern of assessment we introduced, which demanded a significant amount of time towards the end of each module; it may partly be due to initial enthusiasm damping down as the programme progressed. Evaluation data from participants suggests that both these factors played a part, as did the fact that they felt they were not being "rewarded" in any way for participating in online discussions. This feedback from the first two cohorts has been so powerful that we have amended the assessment pattern for the third cohort to include a requirement that participants engage in at least 70% of the online tasks, and reflect on their contribution to one of those tasks once the discussion on it has concluded. This is a good example of design responding to the evaluations of participants, whilst attempting to remain true to the aims of the course: assessment is based not on the volume of responses, but on the quality of reflection those responses prompt.

The programme runs from January to November in the calendar year, and we have not noticed any great decline in activity over the traditional summer holiday period, although there is a decline in activity in the traditional "change-over" month of September, when other aspects of practice occupy the health professions. Data on access times to materials in the first two cohorts revealed the majority accessing the material during early morning or lunchtime working hours, or in the late evening (after 9.00pm). Significant use was made of the site on public holidays and in the early hours of the morning. The large range of access times confirms our design decision to concentrate on asynchronous discussion boards, rather than synchronous chat rooms.
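As an illustration of how such patterns can be drawn out of VLE tracking data, the short Python sketch below bins access timestamps by hour of day. It is a minimal sketch only: the log format and timestamps are invented for the example, and WebCT's actual tracking exports are not reproduced here.

from collections import Counter
from datetime import datetime

# Hypothetical access timestamps drawn from a VLE tracking log
# (invented for illustration; WebCT's real export format differs).
log = ["2001-03-05 07:42", "2001-03-05 12:15", "2001-03-05 22:10",
       "2001-03-06 07:55", "2001-03-06 21:30", "2001-03-07 12:40"]

# Tally accesses by hour of day to reveal the peaks described above.
hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in log)
for hour in sorted(hours):
    print(f"{hour:02d}:00  {'#' * hours[hour]}")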

The other major conclusion revealed by evaluation and monitoring data relates to the differential access to the site, and engagement in activities, by different participants. In the first cohort of 20 participants it was easy to identify five separate types of participant. There were firstly three whom we labelled "express trains" – often the first to respond to activities, who steamed ahead and who were almost "hooked" on the technology. Secondly, there were five "reliable and consistent" participants, who maintained a steady rate of participation throughout the programme, who often initiated new topics for debate, and who genuinely engaged in discussion with each other. These five were complemented by four "slow-starters", who picked up momentum as the programme continued, and by four "slowing-downers", who lost their initial momentum some three months into the programme. The fifth group of four were the witness learners or non-participants, who generally downloaded all the material and rarely appeared online. It is interesting to note that this last group of four included three participants who gave very positive end-of-programme evaluations, and who have been at the forefront of the movement to urge their colleagues to join the programme!

This research also showed us that some participants were still having difficulty with aspects of the learner – technology interaction well into the programme. As a result of this evaluation, we designed a new two-and-a-half-week mini-module on interacting with the technology, which we now deliver in advance of starting the first module. This was followed by a design decision to make a number of the early activities "private" between participant and tutor, in the hope of encouraging the less confident to engage in the activities. These design decisions are reflected in the data on participation rates of the second cohort of 24 participants. In this cohort the group of "slow-starters" has vanished, and the group of witness learners has dropped to two. There was still a small group of three "slowing-downers" who lost initial momentum, but there was only one "express train". The vast majority of the second cohort – 18 – fell into the ideal "reliable and consistent" group. However, a new phenomenon came to light – the different speeds at which participants worked through activities. Although all activities had suggested start dates, the "express train" totally ignored these and raced ahead, whilst the "reliable and consistent" group split into two unequal halves. The majority followed the recommended starting dates, but a significant minority of seven participants followed on at their own pace. These participants, although responding to each activity, tended to make statements and reflect on their own learning, rather than engage in the discussion and joint learning which was a feature of the majority of the group.
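For readers wishing to experiment with their own monitoring data, the sketch below shows one rough way such participant types might be picked out automatically, by comparing posting counts in the first and second halves of a programme. The thresholds and counts are invented for illustration; in practice we identified the types by inspection of the monitoring data rather than by any such rule.

# Posts per participant in the first and second halves of the programme
# (invented counts; real figures would come from VLE monitoring data).
participants = {"A": (30, 28), "B": (12, 11), "C": (3, 10), "D": (14, 2), "E": (0, 1)}

def classify(early, late, cohort_mean):
    """Rough, illustrative mapping of posting patterns onto the five types."""
    total = early + late
    if total <= 2:
        return "witness learner"
    if total > 2 * cohort_mean:
        return "express train"
    if late >= 2 * early:
        return "slow-starter"
    if early >= 2 * late:
        return "slowing-downer"
    return "reliable and consistent"

mean = sum(e + l for e, l in participants.values()) / len(participants)
for name, (early, late) in participants.items():
    print(name, classify(early, late, mean))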

In the third run-through this year we have introduced a design device to try to encourage all participants to move through the programme at a consistent rate. We have done this by designating a three-week "envelope" during which an activity will be open for discussion and debate. We hope this will curb any potential "express trains" and will encourage other participants not to get left behind. Of course, this then poses a challenge to those participants who want to work through the material at their own pace. Whilst such self-pacing would fit in with part of the philosophy of the design team, it tends to contradict the emphasis that the team has placed on developing a community of learners. We have decided to state this "up-front", so arguably the programme may not be the best choice for those who want to pace themselves. The structure does provide some flexibility, but this flexibility is bounded by the aim of supporting a learning community.

INTERACTIVITY IN PRACTICE – TYPES OF INTERACTION WHICH OCCURRED

Whilst the above interactivity is comparatively easy to measure, and explanations for variations can often be garnered from traditional evaluation questionnaires, it is far more difficult and time-consuming to explore the actual interactions which take place in an online programme. McKenzie and Murphy have reviewed a number of attempts which have been made to categorise the types of electronic interaction that occur on an online programme (McKenzie & Murphy 2000). They provide two useful tables which summarise the classifications they used in their own transcript analysis. In the first table they look at the dimensions of participation (level of participation, structure and type of participation) and interactivity (explicit interaction, implicit interaction and independent statement). In the second table they assess the cognitive and metacognitive aspects of interaction. This involves coding responses as cognitive – demonstrating critical thinking (clarification, inference, judgement and strategy) or information processing – or metacognitive – demonstrating knowledge or skills (evaluation, planning, regulation or self-awareness). We have been using McKenzie and Murphy's first table to examine dimensions of participation and interactivity.

However, we have turned to another recent American study to find a set of categories which we are now using in our coding of the text of online interactions in our programme (Swan 2001). Swan has followed Rourke (2001) in dividing interactions into three broad categories. These are best presented in tabular form:

Table 1. Affective, Cohesive and Interactive Indicators (after Swan 2001)

Affective indicators:
Paralanguage – features of text outside formal syntax used to convey emotion
Emotion – use of descriptive words that indicate feelings
Value – expressing personal values, beliefs and attitudes
Humour – use of humour: teasing, cajoling, irony, sarcasm
Self-disclosure – sharing personal information, expressing vulnerability

Cohesive indicators:
Greetings & salutations – greetings, closures
Vocatives – addressing classmates by name
Group reference – referring to the group as "we", "us", "our"
Social sharing – phatics, sharing information unrelated to the course
Course reflection – reflection on the course itself

Interactive indicators:
Acknowledgement – referring directly to the content of others' messages
Agreement/disagreement – expressing agreement or disagreement with others
Approval – expressing approval, offering praise, encouragement
Invitation – asking questions or otherwise inviting response
Personal advice – offering specific advice to classmates

We are at present analysing the interactions of the first two cohorts using these indicators. Initially we were hand-coding these, but we are now using QSR NUD*IST to do so, although agreeing what falls under which indicator is proving time-consuming. What is becoming apparent, however, is that we are still not always capturing all the aspects of the interaction which we know it contains.
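One way of quantifying how far two coders agree about which indicator a message falls under is Cohen's kappa, which discounts the agreement expected by chance. The sketch below is a minimal illustration of that calculation, not part of our actual NUD*IST workflow; the two tutors' codings are invented, with labels drawn from Swan's interactive indicators.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same set of messages."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[lab] * freq_b[lab] for lab in set(coder_a) | set(coder_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten messages by two tutors (invented for illustration).
tutor_1 = ["approval", "invitation", "agreement", "approval", "acknowledgement",
           "personal advice", "invitation", "agreement", "approval", "invitation"]
tutor_2 = ["approval", "invitation", "agreement", "acknowledgement", "acknowledgement",
           "personal advice", "approval", "agreement", "approval", "invitation"]

print(f"Cohen's kappa: {cohens_kappa(tutor_1, tutor_2):.2f}")  # about 0.74 here

Values near 1 indicate near-perfect agreement, whilst values near 0 suggest agreement no better than chance – one possible signal that indicator definitions need tightening.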

The coding process we are using is capturing different types of interaction – but it is not able to make a judgement about the significance of the interaction at a particular time in an ongoing discussion. For example, we can recognise the verbose contributor who makes a large number of contributions – but who does not necessarily "discuss" ideas. By contrast we recognise a participant who makes only one contribution to a debate – but this contribution may be crucial to the future pattern of debate. Similarly some participants respond to a request for assistance from a colleague with general words of encouragement; whilst others make very specific practical suggestions of ways of handling a particularly tricky teaching situation. When we turn to coding the use of references in the online discussion, a judgement again has to be made as to which references are fairly standard for the programme, included in the tutor-provided material, and which references are "new" – discovered by the participants in their wider reading and discovery learning. Further examples could be given of how students actually use personal experience, and how they use exploratory learning.

We seem to be discovering that there is an element of qualitative judgement to be made about the interactions which only the participating tutor can judge. Thus although we can use "independent" researchers and coders to help us analyse the first two layers of interaction – quantitative data, and "generic" qualitative data – the deeper qualitative data can only be quarried by the programme tutors, by a process of reflection on the totality of interactions. This argument for supplementing quantitative data and generic qualitative data, with the interpretations of the tutors as participants does of course tie in with the constructivist perspective adopted by the programme team. Even here however there remain methodological problems of a tutor ascribing their perception of the significance of the interaction, without taking into account the participant’s ascription of the meaning originally intended. We are now trying to capture this by asking participants to review the archived discussion they have engaged in on a particular topic, and then asking them to reflect on the significance of their own contribution(s) within that discussion.

INTERACTIVITY IN PRACTICE – BROADER ASPECTS OF INTERACTION

Whilst the majority of interactions on the programme have been in response to a series of carefully constructed tasks or activities, participants have always had the freedom to initiate their own discussions about topics which they have identified as being important. An analysis of the interaction of the first two cohorts on the PGC programme has shown that such interactions have fallen into three categories:

Requests for assistance. Generally these have been in relation to a dearth of material on a particular topic, or to appeals for ideas about how to conduct a particular teaching session.

Notice of discoveries. When participants have found a web-site, electronic journal or other source of material which they rate highly, they have been keen to share these discoveries with their peers.

Policing their own interactions. Both cohorts became increasingly frustrated with the ‘express trains’ on their programme. They discussed how they felt intimidated by the speedy workers, with the result that in both programmes the ‘transgressors’ apologised to the discussion group, and modified their behaviour. Participants also challenged others about the use of stereotypes and sexist language.

When planning and designing the course, the design team thought carefully about the establishment of a chat board or an electronic common-room for participants, to which tutors would not be admitted. A review of the literature suggested that many online programmes created such a 'space'. However, we were not convinced that one should separate social and academic interaction in this way. In the event we found that some participants did incorporate social comments within their academic postings. Where other participants thought such social interaction was distracting, they again intervened and policed their colleagues. The evaluation of the programme has revealed that the two cohorts of participants prefer this system, and the design team believe that this may reflect the fact that the participants on this programme are well-established professionals, for whom chat provides little attraction. The one group of participants who valued using the discussion board for 'chat' were the dentists, possibly reflecting their greater professional isolation compared with colleagues who worked in hospitals.

So far there has been no mention of the interaction of tutors with other participants on the programme. Deriving from a constructivist model, the design team envisaged the tutor as engaging in contingent teaching – intervening at crucial points to introduce new ideas, or to clarify issues where participants were clearly becoming bogged down in a particular argument. Alongside the contingent teaching role, tutors were encouraged to support their tutorial group in one-to-one work, and to handle any more general queries from participants. This expectation of the tutor's role was shared with participants in the first online module and in the first face-to-face session. Despite this, the first cohort initially found this conception of the tutor's role difficult to understand, and there was a great deal of discussion about the 'vanishing tutors'! However, it is significant that as the programme developed, participants recognised the roles, and reflected on how contingent teaching could be used effectively in their own teaching and training activities. This modelling of the role of tutor/educator was evaluated as one of the positive features of the programme, and the design team later incorporated it into a six-week online staff development programme in Designing and Supporting Online Learning – where other lecturers within the institution experienced the programme from the students' perspective, reflecting on the role being played by the programme tutors in providing the online experience for them as students.

FEEDING BACK THE EVALUATION AND RESEARCH INTO FUTURE DESIGN AND DELIVERY

The wealth of data collected by evaluation, by quantitative research, by qualitative research, and by tutor reflection and debate provides a rich seam of material for action researchers and for the design team involved in further developing online programmes within the higher education institution. Although the design team has continued to modify existing online programmes in the light of evaluation and research findings, there are still a number of issues where we are exploring the relationship between design and interaction.

Our evidence suggests that our programme design has been highly successful in engaging participants in reflective activities. This involves reflection on theories, research data and journal articles, as well as on their own and their peers' experience. We have found it more difficult to engage large groups in online discussion. Online discussion has taken place between individuals, between tutors and participants, and between small groups of participants. In large group discussions we have found a tendency for individuals to make statements, rather than engage in debate. These have been critical, reflective statements, but discussion has rarely been sustained beyond six responses. In evaluation, a number of participants have mentioned that they have not joined in debate because the point they wanted to make had already been posted; they have not grasped that signifying agreement is as much a part of debate as stating disagreement. Wallace has recently asked "do students really want to interact?", and the design team has been returning to basic questions such as this (Wallace 2001). Our model of pedagogic design posits interaction as a positive aim, yet the evidence both from our online programme and from related research on face-to-face programmes suggests this aspiration is hard to achieve.
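Claims such as "rarely sustained beyond six responses" can be checked mechanically against a bulletin-board archive. The sketch below illustrates one way of doing so, computing the longest chain of replies in each thread; the flat list of (message, reply-to) pairs is an invented stand-in for whatever structure a real VLE export provides.

# Hypothetical (message_id, reply_to_id) pairs; None marks a thread-starting post.
messages = [(1, None), (2, 1), (3, 2), (4, 1), (5, None), (6, 5), (7, 6), (8, 7)]

def thread_depths(messages):
    """Longest chain of replies beneath each thread-starting message."""
    parent = dict(messages)

    def chain(mid):
        # Walk up the reply chain, returning (root id, depth below root).
        depth = 0
        while parent[mid] is not None:
            mid = parent[mid]
            depth += 1
        return mid, depth

    depths = {}
    for mid, _ in messages:
        root, depth = chain(mid)
        depths[root] = max(depths.get(root, 0), depth)
    return depths

print(thread_depths(messages))  # {1: 2, 5: 3} for the invented data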

Closely linked to this issue has been the design team's concern about non-participants. In the first cohort on the PGC programme a number of participants themselves became quite irate about the online non-participation of some of their colleagues. After a lengthy discussion about ways in which such colleagues should be forced to participate, one of the 'non-participants' came online to point out the contradiction between this insistence on full participation and the material the programme was focusing on at that particular time – differences in learning style. This kick-started one of the liveliest debates on the programme: how far could any one programme or short course be designed to attract individuals with different learning styles, and what is the value of negotiating learning agreements at the start of a programme which set out clear expectations of the types of interaction envisaged? This in turn caused the design team to reflect on how far our supported online learning programme could be modified to facilitate learning by individuals with different learning styles.

Our evidence has also shown that participation in discussion has continued long after the programme has formally ended. The first cohort of the PGC, who formally ended their studies in November 2000, were still discussing and debating issues until June 2001, when we formally closed down the programme site. Similarly, the 2001 cohort, who formally concluded in November 2001, were still debating issues in February 2002. We have clearly created a continuing learning community, which we are now seeking to maintain with the employers' support, using Postgraduate Medical and Dental Education funds. The programme is also being extended into a supported online learning Masters degree in Clinical Education.

CONCLUSIONS

It would have been tempting to conclude this paper with a list of "hints" for designers, but this has been resisted, since it follows from our constructivist perspective that designers, like other learners, benefit from contingent teaching when they need to move into developing online learning. This paper has indicated the theoretical, pedagogic principles which underlie the design of a particular 'family' of courses in one higher education institution, and it has used the 'parent' course – a PGC in Teaching and Learning in Clinical Practice – to illustrate this. It has explored the extent to which a design team has been able to apply these principles in practice, and it has indicated how a variety of evaluation and research techniques can provide a wealth of data to inform future development and redesign. One underlying imperative has informed the paper: the need to return to first principles when designing online programmes. Clarity about aims and objectives, about the model of pedagogy being adopted, and about the forms of interaction being designed into a programme is essential in establishing a clear design brief. This means that we have to challenge the idea that simply transferring masses of material and lecture notes onto the web will provide a satisfactory online learning experience for participants. We may need to recognise that whilst it is relatively easy to develop online programmes on green-field sites, we will have to work hard to clear away the residual pollution before we can build a quality educational edifice on a brown-field site.

REFERENCES

Davis, M. and Denning, K. (2001) Almost as helpful as good theory: some conceptual possibilities for the online classroom. Association for Learning Technology Journal, 9, 2, 64-75.

Jarvis, P., Holford, J. and Griffin, C. (1998) The Theory and Practice of Learning. Kogan Page, London.

McKenzie, W. and Murphy, D. (2000) "I hope this goes somewhere": evaluation of an online discussion group. Australian Journal of Educational Technology, 16, 3, 239-257.

Moore, M. (1989) Three types of interaction. American Journal of Distance Education, 3, 2, 1-6.

OTIS (2001) Online Tutoring e-book (ed. C. Higginson), Chapter 3: Building an Online Community. Online Tutoring Skills Project.

Sackville, A. (2000) Asynchronous Online Learning – A Case Study. www.edgehill.ac.uk/tld/sidebar/tldstaff/sackvila.html

Swan, K. (2001) Building Learning Communities in Online Courses: The Importance of Interaction. Unpublished paper presented to the International Conference on Online Learning, Orlando, Florida, November 2001.

Toohey, S. (1999) Designing Courses for Higher Education. Open University Press, Buckingham.

Visscher-Voerman, I., Gustafson, K. and Plomp, T. (1999) Educational Design and Development: An Overview. In Van den Akker, J. et al. (eds) Design Approaches and Tools in Education and Training. Kluwer, Amsterdam.

Wallace, L. (2001) Do students really want to interact? In Murphy, D., Walker, R. and Webb, G. (eds) Online Learning and Teaching with Technology. Kogan Page, London.