

Using Blackboard statistics as part of your annual self-assessment review

 

As another academic year begins, if you are anything like me then your thoughts are turning very rapidly to the 'P' word: preparation! There are a hundred and one things to consider in preparing for the new year. Am I happy with my scheme of work? Have I got all the materials, content and resources that I need for the coming hours, days, weeks, terms or semesters (or whatever time-period you work to)? Is my Blackboard course site in need of freshening up? Have my new students been enrolled onto my Blackboard course sites? At the same time, now that all the summer exam results are in, you may also be at the point where you can finally finish the self-assessment review for your subject area, with a view to improving the quality of the service that you provide to your learners. The purpose of this article is to suggest how you can use the course-level statistics generated by the Blackboard database to help in this process.

Before I go on to describe how I am developing a means of measuring the relationship between Blackboard use and student learning, I would like to give a word of thanks to Rob Archer from North Tyneside College, who originally suggested the idea of using an Excel radar chart to map Blackboard use against retention and achievement. From Rob's initial suggestion, I developed the definitions for the metrics that I now use to provide data on Blackboard use.

So, what is involved in measuring the relationship between Blackboard use and student learning? First of all, you need to establish which metrics you are going to use. The ones that I use are shown in the table below. They combine the 'traditional' Further Education measures of effectiveness – retention and achievement rates – with a series of measures of different aspects of Blackboard use. The table contains not only the metrics themselves, but also definitions of how the measurements are taken and of how I derived the baselines against which each individual measurement is compared.

 

What is being measured: Student Learning

Metric: Achievement Rate
Definition: % pass rate for the course/qualification.
Baseline: taken from National Benchmark figures.

Metric: Retention Rate
Definition: % of learners completing the course/qualification.
Baseline: taken from National Benchmark figures.

What is being measured: Blackboard Use

Metric: Content Index
Definition: the number of files containing learning materials on a Blackboard course site.
Baseline: assumed one piece of learning material for each topic area on the subject syllabus.

Metric: Blackboard Course Hits
Definition: hits on the main pages of a Blackboard course site, as recorded using the 'Overall Summary of Usage' filter in the Course Statistics area. This figure is then reduced by a factor of 100 to bring it onto the same scale as the achievement and retention rates.
Baseline: assumed it would take 4 hits for a student to access a piece of learning content from the time they access the course site, and that each student needed to access one piece of learning content per week (equivalent to being given one handout per week pre-Blackboard). The resulting figure is also reduced by a factor of 100.

Metric: Active Students
Definition: % of students enrolled on a Blackboard course site who have accessed learning material.
Baseline: the same figure as the retention rate, on the assumption that if 90% of students remain on a course, then 90% of them will have accessed Blackboard as part of their course.

Metric: Online Assessment
Definition: % of students achieving an average score of 40% or more across all online assessments recorded in the Blackboard online gradebook.
Baseline: assumed that 40% was the pass mark for the course/qualification.
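To make the Course Hits scaling concrete, here is a minimal worked example in Python. It is not from the article: the 36-week course length, the 25-student cohort and the 4,120 raw hits are invented illustrative values, and since the article does not spell out whether the totals are per student or per cohort, cohort-wide totals are assumed.

```python
# Hypothetical worked example of the Course Hits baseline described above.
# The 36-week course length, 25-student cohort and the 4,120 raw hits are
# invented illustrative values, not figures from the article.

HITS_PER_ACCESS = 4    # assumed hits needed to reach one piece of content
PIECES_PER_WEEK = 1    # assumed one piece of learning content per week
WEEKS = 36             # assumed length of the course in teaching weeks
STUDENTS = 25          # assumed number of enrolled students
SCALE = 100            # reduction factor used in the article

# Baseline: expected hits for the whole cohort over the course, scaled down.
baseline = HITS_PER_ACCESS * PIECES_PER_WEEK * WEEKS * STUDENTS / SCALE

# Actual: the total from the 'Overall Summary of Usage' report, scaled down.
actual = 4120 / SCALE

print(f"Course Hits baseline: {baseline}")  # 36.0
print(f"Course Hits actual:   {actual}")    # 41.2
```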

 

When the data is entered into an Excel spreadsheet and a radar diagram is produced using Excel’s chart wizard, the results look like this:

[Radar chart: the year's actual figures for each metric plotted against the baseline values]
This gives a good picture of the relationship between the Blackboard measures – the provision of resources, access to them and assessment through them – and the traditional retention and achievement rates by which FE success is judged. Of course, no direct cause-and-effect relationship can, or should, be inferred, since use or non-use of Blackboard is only one of many factors that affect a student's achievement.
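The article produces the chart with Excel's chart wizard, but the same picture can be drawn programmatically. Here is a minimal Python/matplotlib sketch; the metric values are invented purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative values only; substitute your own course figures.
metrics = ["Achievement", "Retention", "Content Index",
           "Course Hits", "Active Students", "Online Assessment"]
baseline = [75, 80, 36, 36, 80, 75]   # assumed baseline values
actual   = [78, 82, 40, 41, 85, 70]   # assumed actual values

# One spoke per metric; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for values, label in [(baseline, "Baseline"), (actual, "Actual")]:
    data = values + values[:1]
    ax.plot(angles, data, label=label)
    ax.fill(angles, data, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.legend(loc="upper right")
plt.show()
```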

Building up this picture over a number of years gives a useful view of the trend in this relationship over time, and is perhaps the most valuable way of using this information.

This is shown in the Excel radar chart below:

[Radar chart: baselines and actual figures for several years overlaid on a single chart]

However, as you can see, this can get very confusing and difficult to read after a few years, with separate lines for the baseline and the actual data for each year.

An alternative to this is to map the actual data against baseline data over a rolling three-year period. In this model, the actual data for the first year (2001-02) is used as the baseline. This would then yield a radar chart like this:

[Radar chart: three-year comparison with the 2001-02 actuals as the baseline]
When the data for 2004-05 is finalised, the actual figures for 2002-03 would be used as the baseline and a new radar chart would be generated. This means that current data is always being compared with that of the previous two years – an approach which resonates with the inspection cycle of OFSTED and ALI and, I would suggest, has credibility with management.
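As a small sketch of that rolling comparison (the year labels follow the article, but the dictionary of results and its figures are invented for illustration):

```python
# Hypothetical rolling-baseline lookup: each year's figures are compared
# against the actuals from two years earlier. All values are invented.
results = {
    "2001-02": {"Achievement": 75, "Retention": 80},
    "2002-03": {"Achievement": 78, "Retention": 82},
    "2003-04": {"Achievement": 80, "Retention": 81},
    "2004-05": {"Achievement": 83, "Retention": 84},
}

def baseline_for(year: str, window: int = 2) -> dict:
    """Return the actual figures recorded `window` years before `year`."""
    years = sorted(results)
    idx = years.index(year)
    if idx < window:
        raise ValueError(f"No data {window} years before {year}")
    return results[years[idx - window]]

print(baseline_for("2004-05"))  # the 2002-03 actuals become the baseline
```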

Having shown how I measure the relationship between Blackboard use and student success, I would like to raise some issues about why I do this.

Firstly, Blackboard represents a significant financial investment for most institutions, and Finance Directors will, quite rightly, focus on the return on investment (ROI). If Blackboard is having little, or even a negative, impact on student success, then what justification is there for continuing to pay the annual licence? The measurements above reveal, albeit in a somewhat crude way, what the return on Blackboard actually is.

Secondly, and perhaps more importantly from an operational point of view, conducting such an analysis can raise important pedagogical issues which should be addressed.

For example, if the analysis reveals that the use of Blackboard is going up year on year but student success is not, then I would have to question the effectiveness of both the process of making content available to students via Blackboard and the type of content made available. In other words, I would need to consider questions such as: Is the content suitable for on-line delivery? Are the students motivated to learn on-line? Do they have the requisite study skills to make the most of the content? Do they have the requisite IT skills to access it?

Similarly, I could examine the relationship between students' achievements at qualification level and their performance in the on-line assessments carried out during their course. If there is an improvement in both the former and the latter, should I consider increasing the amount of on-line assessment? If there is a decline in the former but an increase in the latter, is the form of the on-line assessment too far removed from the assessment types used in the qualification?

I am sure that you could add at least a dozen other questions to this list, so I will not labour the point. Suffice it to say that the intention of this article was to offer some ideas about how Blackboard's reporting functions could be used to inform teaching. After all, when all is said and done, Blackboard is, and from my point of view always will be, primarily a teaching tool, and as such it is important that we take stock of the tools we use in our everyday teaching practice to ensure that we are providing learners with a service geared towards academic excellence.

Author: Merv Stapleton

03 October 2004

VLE: Blackboard

 

