
Student experiences of assessment: forthcoming conference presentation

A one-day Assessment Conference will take place at the Open University on 19 January 2016. The conference, which is open to all staff, will feature morning presentations, a lunchtime poster session and an afternoon of workshops. I will be presenting the ‘Key Findings from the 2015 OU Student Experience of Feedback, Assessment and Revision (SEFAR) Survey’ in the morning session, along with two posters during the lunchtime session.

The SEFAR Project comprised a student questionnaire and follow-up telephone interviews, undertaken to learn more about student attitudes to, and experiences of, four key elements of assessment at the Open University: TMAs (tutor-marked continuous assessment), examinations, EMAs (end-of-module assignments) and revising for examinations. The expectation was that student experiences of different aspects of the assessment process could be compared and contrasted. 281 students responded to the questionnaire, and 13 of these volunteered for, and took part in, hour-long interviews.

The survey included questions relating to ten themes, and I hope to include data from the final project report (Cross, Whitelock and Mittelmeier, 2015) relating to most, if not all, of these themes:

  • Support, Guidance and Use of Resources
  • Contribution to learning
  • Time allocation and time spent
  • Assessment preparation and clarity of instructions
  • Question quality and awarded marks
  • Tutor feedback, peer feedback and collaborative assessment
  • Strategic Learning
  • Student preference for EMAs or Examinations
  • End of module assessment as motivator for learning
  • Student support for alternative forms of TMA feedback

OU Staff can access the final project report on the institutional Scholarship Exchange: Cross, S., Whitelock, D. and Mittelmeier, J. (2015). Student Experience of Feedback, Assessment and Revision (SEFAR) Project: Final Report. August 2015. Open University.


Distance learners’ use of handheld devices for study

The nature of distance learning and the constantly changing patterns in the ownership and use of handheld devices make it essential to continually monitor and review how students are using their handheld devices for study. How do patterns of ownership, adoption and use by distance learning students differ? How are study habits and learning experiences changing, and how do students perceive this? Does use in study feature in student decisions to purchase devices?

These questions formed the focus of a 2013 student survey undertaken by the OU’s Pedagogy of Ebooks Project. A report detailing these findings is now available as an IET Research and Innovation Report. The report describes the results of an undergraduate survey which asked students how they used e-readers, tablets and smartphones for study. This represents a snapshot of the rapidly changing interaction between technology and education, and highlights issues and opportunities for Higher Education in supporting student adoption of appropriate technologies and the development of effective new methods of study.

The Pedagogy of Ebooks Project began in 2012 and seeks to document, analyse and explain the changing study practices of UK distance learning students as they employ, adapt and integrate the use of new portable digital devices such as e-book readers and tablets into their learning. Data from the most recent survey, conducted in 2014, are currently being written up, and a 2016 survey is anticipated.

A copy of the report is freely available: Cross, S., Sharples, M. and Healing, G. (2015) E-Pedagogy of Handheld Devices 2013 Survey: Patterns of Student Use for Learning, IET Research and Innovation Report Series IET-2015-01, ISSN: 2058-9867. Available at: http://proxima.iet.open.ac.uk/public/2015-01-RI-E-Pedagogy-of-handheld-devices-2013-survey.pd

 


Collaborative Learning and Assessment at the Open University (CoLAb) Project

This spring, Denise, Graham and I began work on the Collaborative Learning and Assessment (CoLAb) project. The project sought to provide strategic information about the practice and effectiveness of collaborative learning and assessment at the OU, and followed on from a White Paper by Whitelock and Cross (2014).

The internal report is now published and investigates three specific aspects of computer supported collaborative assessment and learning:

1. What impact does collaborative activity have on student satisfaction and pass rates?
2. What are the characteristics of effective and less effective uses of collaborative learning and assessment?
3. Do students want to see more collaborative learning, group working and peer assessment in their modules?

The first section looks at the student view of collaboration as represented by data from a question in the university’s annual student experience questionnaire (‘Taking part in collaborative activities with other students helped me to learn’). This part of the analysis uses the ‘not applicable/not used’ option to interrogate the data in new ways and to contrast student perceptions from different discipline areas. It also outlines findings from a recent IET pass rates project. One interesting finding was the variation in use of, and satisfaction with, collaborative assessments between subject areas.

The second question was examined by analysing fifteen modules that used collaborative learning and assessment. Eight of the case study modules were ones where a high proportion of students felt the collaborative activities had helped them learn, whilst the other seven had lower levels of satisfaction. Section 3 reports key findings from the analysis of the module case studies and the challenges encountered when compiling this data.

The final question used data from the recent SEFAR (Student Experience of Feedback, Assessment and Revision) student survey (Cross, Whitelock & Mittelmeier, 2015). At the request of this project, the student survey included questions about collaboration and about peer assessment.

The report concludes with a discussion, key findings and recommendations.

The final report is available to OU staff at: Cross, S., Whitelock, D. and Healing, G. (2015) Collaborative Learning and Assessment (CoLAb) Project: Final Report. September 2015. The Open University. If you are not at the OU but are interested in our findings then please do drop me a line.


Defining the learning design problem space: creating a better learning solution

This presentation outlines a further development of the LATTICE model for the learning design problem space. Drop me an email for a copy of the presentation.

Cross, S. (2012). Defining the learning design problem space: creating a better learning solution, International Blended Learning Conference, 13-14 June 2012, Hertfordshire, UK.


Lattice Framework (2010)

This post was first published 5 July 2010 (https://latestendeavour.wordpress.com/2010/07/05/the-lattice-model-for-designing-learning-defining-the-design-problem-space-and-guiding-the-design-solutions/)

Lattice Model of the Designed Learning Problem Space

Since January I’ve scaled back my time with the OU Learning Design Initiative; however, I am still involved with project managing our JISC-funded project. In addition, I have retained my personal interest in how visual conceptualisations and cartographies of learning could benefit the design process, and in how these can enrich, even fundamentally change, the student experience of learning.

On this page, I’ll address the first part of this interest – the design process. There are two related concerns: how to set out and imagine a model of the design problem space (the first step to developing a solution), and how this could be used as a more practical tool for designing learning. You will notice that I talk about ‘designing learning’ or ‘designed learning’ rather than ‘learning design’ or ‘Learning Design’; this is intentional, as should become evident.

The diagram below shows where I was in Summer 2010 in imagining what key dimensions exist in a design problem/solution space and how they link together. I call this the Lattice model because of the inter-relational nature of the design elements. The purpose of laying this out as a network, rather than as a list or linear form, is to make explicit and explore the interconnectedness in designing learning. I would note that this is just a snapshot of a changing model.

The construction of this model has been framed by a number of observations and literatures. I’ll set out a few below, although I haven’t the space for an exhaustive account:

• Representations of learning designs tend to be concerned more with the observable, performed elements of activity, but we need to move much further beyond this. The sequence (or swim-lane) visualisation is a good example – a vertical line showing learning tasks with resources, support and sometimes learning outcomes connected to it. This layout was used in an early paper by Oliver and Herrington (2001), who showed the ‘three critical elements of learning design environments’ – learning task, learning resources and learning supports – with a basic notational system of rectangles, triangles and circles. Six years later, these components were still important to learning design – for example, Helen Beetham (2007) defines a learning activity as ‘a specific interaction of learner(s) with other(s) using specific tools and resources, orientated towards specific outcomes’ (marked A on the diagram below). Conole’s pedagogy profiler and the OU’s broader project to combine pedagogic and business visualisations of a course are examples of this moving forward with specific representations of aspects of designed and delivered learning. However, it remains uncertain how these descriptions connect together, how they help conceptualise the overall problem/solution space and how far they offer critical understanding. Many constraints on, and drivers of, a design remain undisclosed. A greater range of dimensions (elements relating to the design) is needed to fully map the design landscape.

• The ‘sociocultural approach’ is an important perspective for educational psychology in its attempts to theorise the role of culture and society. Although this is certainly not the only theoretical position from which to derive understanding (see later), given its key role any model should aim to accommodate (and yet also push?) it. In doing so we should acknowledge that the designer/teacher is not detached from the design process but implicated at a personal level in it. As the designer is both culturally and historically situated, their positionality and ‘intent’ (a term with echoes back to American pragmatism) become important. Goodyear talks of the importance of representing intent, Strobel et al. (2008) of capturing the design context, and my experience at the OU working with Paul Clark and Andrew Brasher in trying to de-construct and visualise existing units revealed how important it is to know the thinking – and the evidence supporting that thinking – ‘behind’ a design (marked B below). Moving further, there is a need to situate learning as a social act – as Rogoff, for example, holds: learners engage in shared endeavours that are both structured and constituted by cultural and social norms (Rogoff, 1995). However, it is difficult to find a language with which to label this dimension/box, because traditions in social and cultural theory range widely on how this act could be interpreted, and there is now an increasing interlacing between them. For now, I’ve borrowed from Giddens’ structuration theory the notion that there are structural rules and resources, and added discourse to this, principally as a nod to post-structuralism and hermeneutics (C). This label is therefore vague enough, yet it drives us: to a more nuanced understanding of our students – be this deeper psychological (Solomon, 2000) or social and cultural (Scheel & Branch, 1993) – and of the opportunities for, and means of, learning these enable or constrain; and to the intention of the designer and the purpose of the activities (and ‘where’ they happen).

• From other design disciplines we learn the importance of first reflecting on and describing the design ‘problem’ space – from which the solution(s) will emerge – rather than racing straight into developing the solution (see earlier posts). Early IMS Learning Design had little to say about how one actually arrived at the design, and whilst patterns outline aspects of the problem, their representation is designed to support someone looking for a solution rather than understanding the problem in the first place.

• The role of assessment in the design needs to be reconsidered – seeing it not as a product but as an activity in itself. One option is to understand assessment as a process that ‘acts on’ student output (i.e. an object, action, spoken word, etc.). It would see this output as a resource produced for a specific audience that could be used again later in the learning activity, or that could be transformed into a new artefact/resource (i.e. through the activity of the teacher, student, etc.). Irrespective of whether, or how, this output is re-used in the learning activity, it will (or should) also constitute the evidence: to demonstrate that the learning outcomes/objectives have been achieved (marked D below), to reveal other unanticipated outcomes (after Eisner, Polanyi, etc.) and to support other forms of evaluation (I’ve just jotted down Zakrzewski’s three on the diagram at present).

• There remain many other, often more pragmatic, perspectives to integrate into the design problem space – thereby reflecting the heterogeneity of educational thought. For example: instructional design’s interest in detailing what is to be learnt, learning tasks, student prior learning, etc. (marked E); and the belief that a design should be built around key learning or conceptual ‘challenges’ (G). Clearly, to appeal to a range of teachers the model should not be restricted to one individual theory of learning. This is partly why I favour talking about ‘designing learning’ or ‘designed learning’ rather than ‘learning design’.

• Design of a unit of learning is influenced by practical constraints and conditions (H) defined at higher levels, e.g. the block or the course (the issue of layers of design and fitting them together has been much discussed, and is something we’ve looked at in mapping courses), by other ‘evaluation’ demands from the institution or researchers (F), by previous units (for example, prior learning (I)) and by the guidelines and training required of staff (J). The temporal and multi-scale nature of the design problem needs representation (grey-shaded boxes).

• Visual representation is a powerful means to communicate complex, non-linear, inter-connected relationships. It offers distinct advantages over linear description and can support problem-solving performance (for example, Baylor et al., 2005). This is supported by our small-scale studies at the OU (n=45-50), in which a majority of staff said that there were aspects of their work that would, or do, benefit from using visual representations and techniques (81%); that they would like to improve their knowledge of visual representation and tools (81%); and that more use of visual representations (showing what is to be learnt and how) could help students better understand and plan their study (73%) (Cross et al., 2009).

As a practical design tool?

Whilst the model itself can provide a framework for imagining the problem/solution space, of interest to many will be how it can be translated directly into a more practical application. The screenshot below shows an early attempt in Excel. Here, each dimension becomes a zone (a box) in which information about the design (be this text, lists or labelled mind-mapped objects) can be inserted. In a typical scenario, the design will evolve and mature as the information in each zone is augmented, refined, revised and inter-linked with that in other related zones.

We know from interviews and observation that the learning/activity design will often start with fragmented, vague and fuzzy ideas and develop incrementally, eventually resolving into a finer, more precise design. We also know that designing may end when the design artefact is considered fit enough for purpose (sufficient for planning and delivery) even though the design itself may not be mature or fully worked through. And we know that a design-in-progress may simultaneously contain some elements that are vague and others that are quite detailed, because there is often not the time, or necessity, or even the ability, to describe all relationships in a very precise way. Therefore the layout of a tool must be capable of showing design elements at multiple resolutions, in a conceptual structure that scaffolds this incremental development and makes clearer the level of design maturity that is required.

So how might such a tool work? As I mentioned in the introduction, each zone represents a key dimension of the design problem. As in the Lattice diagram, arrows link these zones, showing how one zone/dimension of design affects others. The picture is one of interconnectivity.
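To make that interconnectivity concrete, here is a minimal sketch (in Python) of how a tool might hold the problem space in memory: zones as named containers of free-form notes, plus directed links recording which dimensions affect which. This is purely illustrative – the names (Zone, DesignSpace, neighbours) are my own hypothetical labels under my reading of the model, not part of any actual tool.

from dataclasses import dataclass, field

@dataclass
class Zone:
    """One dimension of the design problem space, e.g. a 'learner profile' box."""
    name: str
    notes: list = field(default_factory=list)  # free-form text, lists or mind-map labels

class DesignSpace:
    """Zones plus the directed 'arrows' recording which dimensions affect which."""
    def __init__(self):
        self.zones = {}    # zone name -> Zone
        self.links = set() # (from_zone_name, to_zone_name)

    def add_zone(self, name):
        self.zones[name] = Zone(name)

    def link(self, source, target):
        self.links.add((source, target))

    def neighbours(self, name):
        """All zones directly linked to or from the named zone."""
        return [b for (a, b) in self.links if a == name] + \
               [a for (a, b) in self.links if b == name]

The point of the graph structure, rather than a flat list, is that the links can be queried: given any one zone, the tool can say which other dimensions of the design it constrains or is constrained by.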

Step 1: The scale or detail to be entered into a zone is not specified, and intentionally so. The designer/teacher may come to the design problem with some initial ideas: some are likely to be vague (the outline of an activity), others more precise but partial (a resource to be used). These can be noted down in the relevant zone, as a list or, if thinking more visually, as objects. Most zones are just a single blank space, so the language and vocabulary used are not prescribed (the only exception being the learning activity zone, which must, by necessity, consist of a series of boxes to show the temporal sequence of tasks, the top-most being the first).

Step 2: Now, whilst some zones are taking shape, others are empty, and this tells the designer they haven’t adequately mapped out the full problem space. It should be obvious that they can’t really go on to develop a solution without filling these in! So the designer begins a technique not dissimilar to a logic puzzle – using the relationships between zones to help them think about what to add to a zone and its neighbours, ensuring the content of each zone is consistent with that of those it links to. For example, when the specific set of learners (defined in the learner profile) attempts to learn the knowledge and understanding (defined in the ‘what is to be learnt’ zone) with a view to meeting specific outcomes, what key learning challenges may exist? Of course, filling in one zone may lead to a change or addition in another. And so the process continues.
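Continuing the hypothetical sketch above, this logic-puzzle move can be read as two small operations: find the zones that are still empty, then use the notes already entered in linked zones to generate prompting questions about them. The zone names and prompt wording below are illustrative assumptions of mine, not taken from the actual Excel tool.

def unmapped_zones(space):
    """Zones with no notes yet: the parts of the problem space still unexplored."""
    return [z.name for z in space.zones.values() if not z.notes]

def prompts_for(space, name):
    """Use notes in linked zones to generate questions about an empty zone."""
    return [f"Given '{note}' ({other}), what belongs in '{name}'?"
            for other in space.neighbours(name)
            for note in space.zones[other].notes]

# Hypothetical worked example: prompt an empty 'key learning challenges' zone
space = DesignSpace()
for z in ("learner profile", "what is to be learnt", "key learning challenges"):
    space.add_zone(z)
space.link("learner profile", "key learning challenges")
space.link("what is to be learnt", "key learning challenges")
space.zones["learner profile"].notes.append("distance learners, mixed prior study")
space.zones["what is to be learnt"].notes.append("interpreting survey data")

for empty in unmapped_zones(space):
    print("\n".join(prompts_for(space, empty)))

Run as written, this prints one question per note in each neighbouring zone, which is essentially the consistency-checking conversation the designer has with themselves in Step 2.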

Step 3: At some point the designer/teacher may feel that they have ‘enough’ to move on to planning and delivering the session. How one decides what is ‘enough’ will be somewhat arbitrary, although invariably the greater the detail and thought that go into this, the better the outcome. If necessary, at a more micro level the design could be refined further by organising the elements or objects in each zone and linking these directly with others in the same zone or in adjoining zones (creating networks of relationships). This may be useful for constructing knowledge maps, understanding how outcomes link together or organising resources. With practice this could be a useful way to think through the design problem.

The result should be a rich representation of the designed learning. It’s not straightforward, but I don’t think this complexity is a bad thing. Indeed, taken as a whole, the educational literature presents a plethora of alternative, competing, and occasionally contradictory (at least in focus) theories, frameworks, practices and principles, and it’s important to imagine the Lattice-work in which at least some of these may fit together. Of course, representing more detail and being clear how elements fit together should allow substitutions and changes to be made more easily when a design is reused by others – for example, to help see how changing the resources may impact the tasks set; how changing the students’ social and cultural profile will impact the key learning challenges and learning space; or how changing the outputs from students may impact formative and summative assessments.

Well, that’s it for the time being. I hope this discussion has provoked some thought and, whilst as with all my posts I reserve copyright, intellectual property and design rights where applicable, if you would like to use, develop, reproduce or just comment on any of this do just drop me a line.

Selected References

Baylor, A.L., Lee, Y. and Nelson, D.W. (2005) Supporting Problem-solving Performance Through the Construction of Knowledge Maps, Journal of Interactive Learning Research, 16, 2.

Beetham, H. (2007) An approach to learning activity design, in Beetham, H. and Sharpe, R. (Eds.), Rethinking pedagogy for a digital age: designing and delivering e-learning. Oxford: Routledge.

Cross, S.J., Clarke, P. and Brasher, A. (2009) Preliminary findings from a series of staff surveys on perceptions, attitudes and practices of learning design, ALT-C September 2009. http://altc2009.alt.ac.uk/talks/6792

Rogoff, B., Radziszewska, B. and Masiello, T. (1995) Analysis of development processes in sociocultural activity, in Martin, L., Nelson, K. and Tobach, E. (Eds.), Sociocultural psychology: Theory and practice of doing and knowing, Cambridge University Press.

Scheel, N.P. & Branch, R.C. (1993) The role of conversation and culture in the systematic design of instruction, Educational Technology, 33, 8, p7-18.

Solomon, D.L. (2000) Toward a Post-Modern Agenda in Instructional Technology, ETR&D, 48, 4, p5-20.

Strobel, J., Lowerison, G., Cote, R., Abrami, P.C. and Bethel, E.C. (2008) ‘Modelling Learning Units by Capturing Context with IMS-LD’, in Lockyer, L., Bennett, S., Agostinho, S. and Harper, B. (Eds.), Handbook of Research on Learning Design and Learning Objects, ISR
