Compass Points is published quarterly in March, June, September and December. The Surveying Group is a Special Interest Group of the British Cave Research Association. Information sheets about the CSG are available. Please send an SAE or Post Office International Reply Coupon.
NOTES FOR CONTRIBUTORS
Articles can be on paper, but the preferred format is ASCII text files with paragraph breaks. If articles are particularly technical (i.e. contain lots of sums) then LaTeX or Microsoft Word documents (up to version 7.0) are probably best. We are able to cope with most common PC word processor formats. We are able to accept disks from other machines, but please check first. We can accept most common graphics formats, but vector graphic formats are much preferred to bit-mapped formats for diagrams. Photographs should be prints, or well-scanned photos supplied in any common bitmap format. It is the responsibility of contributing authors to clear copyright and acknowledgement matters for any material previously published elsewhere.
COMPASS POINTS EDITOR
Wookey, 734 Newmarket Road, CAMBRIDGE, CB5 8RS. Tel: 01223 504881
SUBSCRIPTION & ENQUIRIES
Andrew Atkinson, 31 Priory Ave, Westbury-on-Trym, BRISTOL, BS9 4BZ.
Tel: 0117 962 3495
The CAVE SURVEYING GROUP of the BCRA. BCRA is a registered charity.
OBJECTIVES OF THE GROUP
The group aims, by means of a regular Journal, other publications and meetings, to disseminate information about, and develop new techniques for, cave surveying.
Copyright (c) BCRA 1999. The BCRA owns the copyright in the layout of this publication. Copyright in the text, photographs and drawings resides with the authors unless otherwise stated. No material may be copied without the permission of the copyright owners. Opinions expressed in this magazine are those of the authors, and are not necessarily endorsed by the editor, nor by the BCRA.
ANNUAL SUBSCRIPTION RATES
Publication      U.K.    Europe (air) & World (surface)    World (airmail)
Compass Points   £4.50   £6.00                             £8.00
These rates apply regardless of whether you are a member of the BCRA. Actual "membership" of the Group is only available to BCRA members, to whom it is free. You can join the BCRA for as little as £3.00 - details from the BCRA administrator. Send subscriptions to the CSG secretary. Cheques should be drawn on a UK bank and payable to BCRA Cave Surveying Group. Eurocheques and International Girobank payments are acceptable. At your own risk you may send UK banknotes or US$ (add 20% to the current exchange rate, and check you don't have obsolete UK banknotes). Failing this, your bank can "wire" money direct to our bank, or, if overseas, you can pay by credit card. In both these cases we have to pay a commission and would appreciate it if you could add extra to cover this.
DATA PROTECTION ACT (1984)
Exemption from registration under the Act is claimed under the provision for mailing lists (exemption 6). This requires that consent is obtained for storage of the data, and for each disclosure. Subscribers' names and addresses will be stored on computer and disclosed in an address list, available to subscribers. You must inform us if you do not consent to this.
COMPASS POINTS LOGO
courtesy of Doug Dotson, Speleotechnologies.
Published issues are accessible on the Web via the CSG pages at http://www.caves.org.uk/csg/
CAVE SURVEYING MAILING LIST
The CSG now runs a mailing list for cave surveyors around the world. To join send a message containing the word 'subscribe' in the body text to email@example.com
CONTENTS

Editorial  1
Forthcoming Events  3
Snippets  3
  Silva LMG plastic compass review - Anthony Day
  Cave Survey Maths Mailing List - Wookey
  Cyrax Laser Surveyor - Al Jagnow
Software Updates  4
  Walls (Version 2, B4)
  Compass
Press roundup  4
  Compass and Tape issue 45
Letters  5
  Making your own waterproof paper - Martin Sluka
How Common Are Blunders In Cave Survey Data?  6
  Larry Fish
  Larry uses the blunder detection in COMPASS to determine approximate numbers of blunders in a selection of datasets. He then adjusts the assumed instrument errors and re-runs the tests on some representative datasets. The results show some fascinating things about real-world instrument errors, and potential for better data analysis tools.
Cave Surveying Group Spring Field Meet Compass Trial  8
  Ben Cooper
  Analysis of the compass experiment data from the SWCC course at the Spring field meet.
Cave Survey Data Archiving Proposal
  Andrew Atkinson
  The current state of the proposal to set up a central survey data archive.
It is an historical accident that we have a Compass Points with a cover date of September. Unfortunately, this always interacts badly with Summer Holidays/Expedition, so it is invariably late. This year is no exception, and is probably a record for the latest CP in five years! Oh well, I expect you all survived the wait.
We have some interesting stuff on the frequency of blunders in real datasets, and more data on just how accurate our instruments really are. The SWCC compass course experiments are an ongoing series. It is becoming clear that we need a lot of data to get convincing answers from these tests. However, hopefully a few more field meets will get us that data.
I hope those of you who thought there was too much about cave survey software in Compass Points are finding the mix a bit better these days. Don't forget to tell someone if you still aren't happy. Hope to see you at the field meet just after (before?!) this issue hits your floor.
The next CSG field meet is at Bull Pot Farm, on the weekend of 2nd-3rd October 1999. Cost will be £3 per night, unless you are a member of the Red Rose Cave and Pothole Club. The meet is open to anyone with an interest in cave surveying, both beginners and experts. It will be held in conjunction with the Cave Radio Group, so the opportunity to 'do' some cave electronics will also be there.
Sat 2nd October
Sun 3rd October
Silva LMG Plastic Compass
When looking to equip myself with a compass for use on a Summer expedition on the cheap, the Silva SM 360 LMG looked ideal. This compass has the same capsule as all the other Silva compasses, but comes in a plastic (rather than aluminium) case with a rubber cover for about £21 including VAT. A normal Suunto aluminium-bodied compass + rubber cover is £74 (and the Silva equivalent is £40). Even a Silva capsule alone is more expensive than this compass!
Initial impressions were very favourable. The scale is very clear, and doesn't seem to mist up as much as the aluminium-bodied clinometer we were using when surveying in moderately wet caves (i.e. with running water in them). I suspect this is because the case has a small hole on each side allowing condensation to escape. Of course in a very wet cave they could just as easily let a flood of water in, though they could easily be filled. Another potential problem is that the lens is viewed through a small slit in the case, which could potentially fill up with mud and be a pain to clean out without scratching the lens or taking the case apart. However, this was not a problem in the caves we were working in.
However, the major problem we encountered was that after only three trips, the capsule became detached from the case and was free to rotate. After taking it apart we found that the capsule was fastened onto the case by three spots of glue round its edge. A more robust solution might be to score the case and the base of the capsule and apply glue to the whole of the contact area (I haven't done this yet.) However there are no guidelines with which to align the capsule (unlike the aluminium-bodied ones where the capsule is positioned by little glued-on tabs that go into slots, and a grub-screw to hold it securely), so calibrating it properly is going to be a real pain.
Overall, the evidence of my brief experience is that this compass is not robust enough for use in caves, and represents false economy for anything other than a very short term project.
An interesting item from the Speleonics Mailing List:
Al Jagnow <firstname.lastname@example.org>
I attended the Cyrax laser surveying seminar in Minneapolis last week. It is a great device - basically a solution looking for problems. It is a bit bulky for cave use - the battery pack used for the demo is about the size of a steamer trunk and weighs over 45 kg. The standard 24V battery is 21.4 kg, which gives 4 hrs use at 125 W. The instrument weighs 30 kg, is about 0.6 m tall, 35 cm deep, and 30 cm wide. It sits on a standard surveying tripod. It talks to a laptop computer via a 10baseT ethernet connection.
Operation is simple. You point it in the general direction you want to survey, look at the view from the built in video camera as displayed on a laptop computer, select the area you want to survey, and let it go.
It does a raster scan of the selected area at whatever point density you have selected. It can do 1000 x 1000 points in a 40 degree viewing angle at a time. If you want a greater density, you select a smaller viewing area. The software lets you zoom in or fly around the '3D point cloud' as it is being collected - takes about 20 minutes for the maximum density survey. Each point has x,y and z coordinates as well as a reflectivity value. The reflectivity lets the software display a false colour image.
Once you collect the point cloud, you can do a 'shrink wrap' that puts a surface over the points. By selecting any two points, you can get a distance. The software can stitch several scans together if they have at least three common points. They use either reflective spheres or Scotchlite (tm) spots as reference targets. You can tie the plot to bench marks by putting a target at a surveyed location within the field of the scanner. You can also set the scanner to only collect points within a specified distance so you can drop out the foreground or the background.
The operating range of the scanner is from 0.5 meter to 100 meters, although up to 50m is recommended, at which range it can see surfaces down to 5% diffuse reflectivity. It uses an eye-safe pulsed frequency doubled green (532nm) laser, at 1mW. The location accuracy (azimuth and elevation) is within 6mm up to 50m range. The distance is within about 4 mm. That's pretty good when you consider the timing requirements.
It was also pointed out that it cannot measure the distance to moving objects. I think that they are doing a statistical sampling of multiple pulses to the same point. If the target is moving, they cannot get a reading, much as traffic radars will not display a speed unless they get three readings at the same speed. The recommended survey crew is three people, although you could get along with two (or one very strong person).
The software is what makes the system great. You can select any units that you want - either metric or imperial. It can recognise common shapes (sphere, cone, plane, etc.) as well as pipes, I-beams, structural steel, and valves. Then it can fill in the back sides of your image from its database. It can give a very quick virtual-reality type of image. For accurate imaging, you need to scan from multiple positions to cover all sides of the objects of interest.
Another interesting application is being able to export the scanned image in AutoCad format (it also does MicroStation DGN, OpenInventor, ASCII point stream, VRML 2.0, Alias/Wavefront OBJ, or even BMP or JPEG) so you can work with it in AutoCad or overlay it with your AutoCad drawings to see if the as-built matches the plans.
Cost: The package costs about $180,000.00 - including about $8,000.00 for training. You can rent it for $1500.00 per day (you probably need training first) or for $2500.00 per day with a survey crew. It is a bit too expensive for most of us, but could be a real time saver in the right application. It requires a bit of a paradigm shift since you are starting with a 3D image and then cutting slices through the image to get a 2D drawing instead of building the 3D image from your 2D drawings.
It would be a great tool for an archaeology dig, for mapping hazardous waste sites, for surveying storm damage (or earthquake damage) lots of possible applications.
Now if they can shrink the device to be the size and weight of Mr. Sulu's Tricorder... and sell it for under $200.00 .........
Check the Cyra web site for more info. http://www.cyra.com
If you have a chance to attend one of their local seminars or to see the scanner at one of the shows, it is worth the time - just for opening up your mind to the possibilities.
Cave Survey Maths Mailing List
After recent articles in Compass and Tape (see Press Roundup) about loop closure methods, John Halleck has started to write a definitive paper on the subject which aims to be thorough, but also comprehensible to cavers. This should finally bring to an end ill-informed debates on the subject, which has been fully understood by real land surveyors for over a hundred years. The paper is taking shape slowly on the web, with input from several cave survey software authors, and a couple of 'intelligent laymen' to stop the mathematicians writing incomprehensible things or making too many assumptions. It even tries to cater for the different terminology used in UK and American English with a glossary.
Once completed this will be an excellent resource for cave survey software authors, with example code to complement the text also online.
A mailing list has been set up to discuss developments, which are progressing well. Once the paper is a bit more complete I will publish some of the introductory parts in Compass Points, and give a pointer to the rest of it.
Peter Sprouse <email@example.com>
David McKenzie has released a new version of WALLS, his cave mapping software. You can download it at http://www.realtime.net/~davidmck/wallsbeta/walls2b4.exe
Here is the description of the new features:
Ver 2, B4 Pre-release, Build 1999-08-17
In anticipation of other enhancements to be available this fall (such as support for image data), this release allows a project to contain data files physically located anywhere on your system, not just in the directory containing the PRJ file. A tree node's path property can be either absolute (with a drive letter) or relative to that of its parent.
Besides simplifying data sharing between different projects, this also eases management of very large projects with perhaps thousands of data files. (Thanks to Bob Osburn for first suggesting this change.)
The enhanced properties dialog is now "modeless", allowing it to remain open as project tree items are examined and rearranged. Drag-and-drop operations have been extended to include more options, including support for right-button dragging and context popup menus. Please check out the revised help file topics, "Properties and New Item Dialogs" and "Project Trees".
Some of the underlying code modules have been overhauled. A recompilation with Microsoft's latest compiler release has resulted in smaller, more efficient code.
Included in this release is a new version (2.0) of CSS2SRV, a program that converts Compass data files (CSS format) to Walls projects. For details on new features and usage, see CSS2SRV.TXT (installed with CSS2SRV.EXE in the Walls program directory).
Larry Fish <firstname.lastname@example.org>
I would like to announce a new release of the cave survey software package COMPASS. There are many major new features and lots of minor improvements.
1. Viewer. The Viewer now has the ability to vertically magnify the plot. With vertical magnification, the vertical aspect of the plot is magnified while the other dimensions remain the same. This accentuates the vertical features and it is useful when you are working with a relatively flat cave. Increasing the vertical magnification allows you to see the subtle vertical features in the cave.
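Vertical magnification of this kind is simply a scale factor applied to the depth axis before plotting. A minimal sketch (our own function name, not COMPASS code):

```python
def vertically_exaggerate(stations, factor):
    """Scale only the z (depth) component of (x, y, z) station positions,
    leaving the plan dimensions untouched - the effect described above."""
    return [(x, y, z * factor) for (x, y, z) in stations]
```

Applying a factor of 2 or 3 to a near-horizontal line plot makes gentle gradients visible without distorting the plan view.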
2. Cave Editor. The Cave Editor has been improved so that it takes advantage of higher resolution displays and larger monitors. The program allows the window to expand to full size and the various editing screens expand with it. In other words, more rows and columns from the survey are visible on the screen. This means, for example, that on a 1024 by 768 resolution display, 20 lines of survey and all columns are visible without scrolling.
3. Viewer. Improved the colour rendering of the background bitmap images. This results in more accurate colour rendering of topographic maps as backgrounds to the line plots.
4. Cave Editor. Fixed several problems that can occur when illegal characters are entered into the cave name, survey name, survey comment, survey team and shot comment fields. This usually occurred when someone copied strings from the clipboard that had carriage returns embedded in them.
There are also 11 other bug fixes and improvements.
Publication of the Survey And Cartography Section of the NSS (from the USA)
Suunto Compass Holding Methods. Eric Hendrickson describes his findings from letting many new surveying students do a set course using Suunto KB14s, both with and without a tripod. He found that tripods only made a significant difference for first-time users. A hard surface to brace on was just as effective after some practice. He also found that about 10% of people weren't very accurate and never improved, no matter how much practice they had.
More Thoughts Regarding Simultaneous and Sequential Loop Closures. Bert Ashbrook makes some very sensible points about how the use of weighting factors and checking the expected errors of loops against the actual errors means that you can have the benefits of both 'Sequential' and 'Least Squares' loop closure.
Simultaneous/Sequential Loop Closure: Round 2. Bob Thrun follows on from his article in Issue 44 criticising Sequential Closure, rebutting Larry Fish's 'the random numbers were not properly random' argument, and making some good points about the problems with sequential closure, at least as implemented in COMPASS.
More Discussion on Least Squares Loop Closures. Larry Fish takes a step back from the argument to explain why most existing implementations of Least Squares in cave survey software deal badly with weightings, and thus produce flawed results.
Overview on Least Squares Cave Survey Issues. John Halleck, a man who actually understands surveying and statistics, explains how most cavers, and many cave software authors, are confused about the analysis and adjustment of survey data: their assumptions are wrong, their software is wrong and their results are wrong. He puts this in context by pointing out that often it doesn't really matter (if you just want to draw up a nice survey), but that the incorrect or simplified analysis of data is missing a lot of information that could be useful. He also points out that, done properly, both sequential and simultaneous least squares produce exactly the same answers, so the argument about which is best is simply bogus. Much of the misinformation and confusion in the field is caused by the fact that cavers only read cave survey literature, instead of general survey/maths literature, perpetuating myths. A likely reprint in Compass Points, this one.
Cave Survey Software: What Features Do You Need and/or Want? Pat Kambesis discusses survey software in general terms, considering what features you need for particular tasks.
Templates for Reading Topo Map Coordinates. Bob Thrun mentions his software for drawing overlays which allow you to work out lat. and long. on US Topo maps. The scale varies across the country and these overlays sort out that problem for you.
Survey and Cartography Session Abstracts, 1998 NSS Convention, Sewanee, Tennessee.
Enclosed are examples of standard mm paper impregnated with polystyrene I have used for a long time (since 1975) in caves. This method of impregnation was found by my friend Libor Jech from Prague.
There are four examples - numbered from 0 to 3:
You should test this paper in wet conditions - try drawing with different kinds of pencils, using a rubber, washing mud from it, and so on. You will see.
Dissolve 5-20% of polystyrene foam in solvent, use polyethylene bath and simply wet one side of the paper on the surface of the solution. Dry in air, with hot air, or under infrared light - never use flame, of course. Be careful to have good ventilation.
[These sheets will be at the field meet where we shall try them out and report back. Initial impressions are favourable. Ed]
Larry Fish <email@example.com>
One of the most important problems facing cave surveyors is blunders. Blunders are fundamental errors in the surveying process and, unlike random errors, they can have drastic effects on the accuracy of a survey. For this reason, it would be very useful to know how common blunders are in cave survey data. Not only does this question have implications for the accuracy of our maps, but it also has implications for the design of cave survey software.
[Table 1 row: Cave Of The Winds | 17 | 13% | Colorado | 2.0 | 3.2]
There are three kinds of survey errors: random errors, systematic errors and blunders. Random errors are generally small errors that occur during the process of surveying. They result from the fact that it is impossible to get absolutely perfect measurements each time you read a compass, inclinometer or tape measure. They are predictable, their effects are generally small and they can be dealt with using standard statistical techniques.
Systematic errors occur when there is a constant, fixed error being applied to the data. For example, they could be caused by a bent compass needle, a stretched tape or a distortion of the earth's magnetic field. In some cases, they can be corrected by simply subtracting a constant from the data.
Blunders are fundamental errors in the surveying process. Blunders are usually caused by human error. They are mistakes in the processing of taking, reading, transcribing or recording survey data. Some typical blunders would be: reading the wrong end of the compass needle, transposing digits written in the survey book, or tying a survey into the wrong station. The thing that makes blunders so important is that they can produce very large and unpredictable errors.
The COMPASS survey software has a feature that calculates the percentage of loops in a cave that are blundered. The feature is designed to give you an overall sense of the quality of the surveys in a cave.
The process of finding blunders begins with an estimate of the typical errors that would be found in surveying instruments. The values are specified as standard deviations of the instruments. For example, the standard deviation for a typical survey compass might be 2 degrees.
The program then walks around each loop, projecting the expected errors through each shot and mathematically combining the result. This gives you a predicted error level for the whole loop if all the errors are random. Thus, any loop which has a total error exceeding the prediction is probably blundered. COMPASS lists the percentage of loops that exceed two standard deviations from the prediction. Because of the way the statistics work, any loop error greater than two standard deviations over the prediction has a 95.4% chance of being blundered.
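The walk-around-the-loop prediction can be sketched as follows. This is not the actual COMPASS code, just a minimal illustration of first-order error propagation under the simplifying assumption that all shot errors are independent; the function names and default standard deviations are our own:

```python
import math

def predicted_loop_sd(shots, sd_compass_deg=2.0, sd_clino_deg=2.0, sd_tape_m=0.03):
    """Propagate assumed instrument standard deviations around a loop.

    shots: iterable of (length_m, azimuth_deg, inclination_deg).
    Returns the predicted standard deviation of the loop misclosure
    distance, assuming purely random, independent errors.
    """
    sa, si = math.radians(sd_compass_deg), math.radians(sd_clino_deg)
    var = 0.0
    for length, az, inc in shots:
        a, i = math.radians(az), math.radians(inc)
        # Shot components: x = L cos(i) sin(a), y = L cos(i) cos(a), z = L sin(i).
        # Sum squared partial derivatives times instrument variances (x, y, z).
        var += (math.cos(i) * math.sin(a) * sd_tape_m) ** 2 \
             + (length * math.cos(i) * math.cos(a) * sa) ** 2 \
             + (length * math.sin(i) * math.sin(a) * si) ** 2
        var += (math.cos(i) * math.cos(a) * sd_tape_m) ** 2 \
             + (length * math.cos(i) * math.sin(a) * sa) ** 2 \
             + (length * math.sin(i) * math.cos(a) * si) ** 2
        var += (math.sin(i) * sd_tape_m) ** 2 \
             + (length * math.cos(i) * si) ** 2
    return math.sqrt(var)

def loop_is_blundered(misclosure_m, shots, **sds):
    """Flag a loop whose misclosure exceeds two predicted standard deviations."""
    return misclosure_m > 2.0 * predicted_loop_sd(shots, **sds)
```

For a square loop of four 10 m horizontal shots with the default 2-degree instruments, the predicted misclosure s.d. comes out at just under a metre, so a 5 m misclosure would be flagged while a 0.5 m one would not.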
Over the years, people have sent me a large number of survey files from caves around the world. I currently have more than 250 data sets from a wide variety of caves. To determine how common blunders are, I tested the survey data from a range of representative caves.
Table 1 illustrates the percentage of blundered loops in 16 caves from the U.S. and Mexico. I have lots of smaller caves, but I chose caves that had enough loops to give meaningful results. This table represents the percentage of loops in each cave that contain at least one blunder. For the test, I set the predicted instrument standard deviations at two degrees for compass and inclinometer and 0.1 foot (3 cm) for the length measurement.
The data here represents a wide variety of caves, survey styles and surveying eras. For example, Groaning and Fixing are tight, crawly maze caves with difficult surveying conditions. Their entrances are at about 3000m (10,000 feet) of elevation and the year round temperature is 4C (39F). It is not surprising that the blunder level is high in these caves. Lechuguilla is a less challenging cave, but the chaos of large expeditions and the rapid pace of discovery produced lots of mis-tie errors. Finally, the Wind Cave data actually has surveys dating back to 1934.
The majority of the caves were surveyed by cavers from the United States using U.S. style surveying techniques. It would be interesting to know if surveyors from other countries, using different techniques would get different results.
As you see from Table 1, there are a surprisingly large number of blundered loops. In fact the average cave in the list has 60 blunders. In many ways this is not surprising given the difficult environment and the large number of measurements that make up a cave survey.
While I was working on this project, Olly Betts suggested an experiment that might show us something about instrument errors. He suggested that we gradually increase the projected instrument errors and see what happened to the percentage of blunders. The result was very interesting.
I started with 0.5 degrees s.d. for compass and inclinometer and 0.025 foot (0.76 cm) for tape. I then measured the percentage of blunders, increased the values in steps of 0.5 degrees for the angular instruments and 0.025 foot for tape, and repeated. I did this for four caves representing a range of survey quality. Table 2 shows the result.
I have also included a graph of the results (Figure 2) that is much easier to understand. As you can see, as the standard deviations for the instruments increase, the percentage of blundered loops drops rapidly and then flattens dramatically. The best cave flattens out at about 2.5 degrees of standard deviation and the lower quality caves around 7 degrees.
I think it is easy to understand what is happening here. As the standard deviations increase, large numbers of the better quality loops are eliminated from the group of blunders and so the percentage goes down rapidly. At some point, all we have left are loops with severe blunders that have not been eliminated by the higher standard deviations. Clearly, the loops below the inflection point are blundered. You would never expect to have random errors of 10 or 15 degrees in a compass or inclinometer. Likewise, the loops at the very top of the curve must be blunder free.
Obviously, the sudden flattening of the curve represents the point at which we shift from unblundered loops (with high instrument standard deviations) to blundered loops. Thus, this point represents the maximum standard deviation for the instruments.
By looking at the graph and calculating the first and second derivatives, it is easy to estimate the point where each line goes flat. Table 3 gives my estimates:
Cave          Angle s.d.    Tape s.d.
Lechuguilla   7.5 degrees   0.375 ft (11.4 cm)
Wind          5.5 degrees   0.275 ft (8.2 cm)
Lillburn      5.0 degrees   0.250 ft (7.6 cm)
Roppel        3.0 degrees   0.150 ft (4.5 cm)
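The sweep Olly Betts suggested can be mimicked with a toy simulation. Everything below is our own illustrative assumption, not the article's data: synthetic loops accumulate random per-shot angular errors with a "true" s.d. of 2.5 degrees, a fifth of them also receive one 60-degree blunder, and we watch the flagged fraction fall as the assumed standard deviation grows:

```python
import math
import random

random.seed(42)

N_LOOPS, SHOTS = 200, 20
TRUE_SD = 2.5        # degrees: assumed "real" random error per shot
BLUNDER_SIZE = 60.0  # degrees: one gross mistake, e.g. a misread compass

# Synthetic angular misclosures: every loop sums per-shot random errors;
# the first 20% of loops also contain a single blunder.
misclosures = []
for k in range(N_LOOPS):
    mis = sum(random.gauss(0.0, TRUE_SD) for _ in range(SHOTS))
    if k < N_LOOPS // 5:
        mis += BLUNDER_SIZE
    misclosures.append(abs(mis))

def fraction_flagged(assumed_sd):
    """Fraction of loops whose misclosure exceeds the two-sigma limit
    predicted from an *assumed* per-shot standard deviation."""
    limit = 2.0 * assumed_sd * math.sqrt(SHOTS)
    return sum(m > limit for m in misclosures) / N_LOOPS

# Sweep the assumed s.d. from 0.5 to 10 degrees in 0.5-degree steps,
# mirroring the experiment in the article.
sweep = [(s / 2.0, fraction_flagged(s / 2.0)) for s in range(1, 21)]
```

As in the article's Figure 2, the flagged fraction drops steeply while clean loops are being released, then levels off once only blundered loops remain above the limit.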
The values may seem surprisingly large, but they are similar to other experimental values. For example, the March 1998 issue of Compass Points (CP#19) has an article describing the analysis of compass errors in an outdoor test-course. In spite of a relatively simple course and the use of experienced surveyors, some of the compass errors were in the range of 6 degrees.
Measuring instrument error this way has two advantages over the traditional survey course method of determining instrument errors. First, the values are based on the combined effects of thousands of measurements, with hundreds of different instruments, done by hundreds of surveyors, using different survey techniques. Second, it enables us to look at the performance of instruments and surveyors in widely varying survey environments.
One disadvantage of this technique is that it gives you a composite error value that doesn't tell you anything about the individual instruments. It could be, for example, that the actual tape errors are much smaller and the compass errors much larger than given here. Perhaps a more complicated test would give separate values for the individual instruments.
In conclusion, it appears that blunders are a common problem in cave surveying, particularly for certain classes of cave. Also, examining real-world data is a very valuable technique for estimating the general quality of survey data and survey instruments. One advantage of the technique is that it tests the composite performance of many different instruments and many different surveyors.
Ben Cooper <firstname.lastname@example.org>
The idea was to survey a fixed course by as many different people using as many different compasses as possible. The data would then be collated to determine whether there were any consistent systematic errors for a particular compass or a particular person.
The course consisted of 10 stations located roughly in a circle, and a single central station (Figure 3). The stations were fence posts that had been set up two years previously. The object was to measure the bearing of each station from the central station. Surveyors were asked to take readings as accurately as possible, but basically in the same way as they would underground. This would produce a data set of 10 bearings, which could then be compared to a theodolite survey of the course.
The theodolite survey was carried out two years ago by Brian Clipstone. He used a compass aligned theodolite at the central station, which once aligned, was used to measure the bearings of the 10 radial stations. His measured angular separation of each station should therefore be theodolite-accurate, although, as is usual with magnetic readings, the alignment of the theodolite may be (and appears to be) subject to an error.
Ideally, each surveyor would have measured the course in the same way, taking forward bearings from the central station. However, in practice, we decided to take back bearings, from each radial station back to the central station. This was to
It was assumed that the back bearing adjusted by 180 degrees would give the same value as the corresponding forward bearing. Unfortunately, the data shows that this is not true. There is a magnetic anomaly at station Six of about 5 degrees, and another at station One of about 3 degrees. The former is thought to be caused by minerals or metal buried in the ground, the latter by either the same thing, or the nearby iron shed. The problem at station Six had already been spotted two years ago (see CP#19), and everyone took forward bearings for station Six to avoid it. A few back bearings were also taken to verify the anomaly. We did not notice the anomaly at station One until half way through the experiment, so there are a mixture of forward and back bearings for this station. As will be explained later, the existence of these anomalies invalidates the comparisons that we hoped to make, and drawing any firm conclusion from the experiment has consequently proved difficult.
We managed to get data for 8 people and 5 compasses. Ideally each person would have used each compass three or more times, resulting in more than 120 data sets, but in practice we only managed 24 data sets. Still, this is enough to start some preliminary comparisons. Note that a data set consists of several compass readings. Nominally, these are ten back readings (one for each station), but some surveyors also took forward bearings for stations One and Six. Two surveyors also made forward bearings for all ten stations.
The data was collated in a spreadsheet, and compared against the theodolite survey data. A residual was calculated for each reading, being the difference between the compass reading and the theodolite reading, adjusted for magnetic declination (Compass - DeclinationToday - Theodolite + DeclinationPrevious). The residual was plotted on a graph for each data set (Figure 1, front cover). To avoid any confusion between forward and back bearing data, these have been plotted separately, as data sets of the 20 locations (10 forward and 10 back). Note that not many data sets contain readings for the forward bearings (2F, 3F, 4F, 5F, 7F, 8F, 9F, 10F), and consequently these data are plotted as zero on the graph. This is why so many lines return to zero either side of the data at 6F.
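The residual formula quoted above can be written as a small helper. This is a sketch: the function name and the wrap into the range [-180, 180) are our own conventions, not taken from the spreadsheet:

```python
def residual(compass_deg, theodolite_deg, decl_compass_day, decl_theodolite_day):
    """Compass - DeclinationToday - Theodolite + DeclinationPrevious,
    wrapped into the range [-180, 180) degrees so that readings either
    side of north compare sensibly."""
    r = (compass_deg - decl_compass_day) - (theodolite_deg - decl_theodolite_day)
    return (r + 180.0) % 360.0 - 180.0
```

The wrap matters for stations near north: a compass reading of 1 degree against a theodolite bearing of 359 degrees should give a residual of +2 degrees, not -358.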
The graph immediately suggests a number of features, which are discussed below.
To ensure that the offset was not caused by a change in magnetic declination over the two years, values for the magnetic declination were obtained from the Geomag program for the dates of the two sets of readings. The difference, however, was minimal: 4.99 degrees on 11/4/99 and 5.16 degrees on 21/11/97. Therefore the observed offset was not caused by this. The conclusion is that there was either an error of about 2 degrees in the alignment of the theodolite, or there is a magnetic anomaly at the central station. This question can be resolved by looking at the forward bearing measurements, which would be subject to the same magnetic anomaly, if any exists at the central station.
Sample standard deviation:                    1.0 degree
Overall average difference with theodolite:  -1.1 degrees
Standard error of average:                    0.12 degrees
Table 4 shows the differences between the theodolite and compass measurements for all the forward bearing data. An average difference at each station is shown, together with the sample size and standard error of the average.
Any variations here will be attributed solely to random and systematic errors and to the set-up of the theodolite. Unfortunately, we did not take a large number of these readings, and the 5 readings at stations Two to Five and Seven to Ten are all the work of one person. However, we do have two locations (1F and 6F) where there were 14 and 24 measurements taken by a number of different people, and which can therefore be averaged. The difference calculated for these two locations is the same, both -1.0 degrees with a standard deviation of 1.1 degrees.
Finally, because there are assumed to be no anomalous magnetic effects and the theodolite measurement is very accurate, it is legitimate to compare all of these readings with each other to give an overall average difference. Random errors will tend to cancel out, as will the systematic errors, by virtue of there being many surveyors and many compasses. The overall average is presented in Table 6.
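The statistics used here - the average, the sample standard deviation and the standard error of the mean - can be sketched as follows. The residuals below are made up for illustration; the real pooled sample was considerably larger.

```python
import math

# Hypothetical residuals in degrees, standing in for the pooled
# forward-bearing data.
residuals = [-1.5, -0.8, -2.1, -0.4, -1.2, -0.9, -1.6, -0.3, -1.1, -1.3]

n = len(residuals)
mean = sum(residuals) / n
# Sample standard deviation (n - 1 in the denominator)
sd = math.sqrt(sum((r - mean) ** 2 for r in residuals) / (n - 1))
# Standard error of the mean: shrinks as the sample grows
se = sd / math.sqrt(n)
```

The last line is the key point: a spread of 1.0 degree over roughly seventy pooled readings gives a standard error of about 0.12 degrees, which is why the pooled average is far more trustworthy than any single reading.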
Thus, from Table 6, it is reasonable to conclude that the theodolite was out of alignment by about 1 degree. The standard error of this measurement is 0.12 degrees, but I suspect that the error is in fact larger, because the sample is still quite small, and may contain some localised correlation (i.e. bad surveying habits!). Notice also that the size of the alignment error is similar to the standard deviation of the sample. This casts some doubt on whether the offset really is an alignment error, or caused by the systematic measurement errors of the surveyors, or perhaps just a statistical glitch in our data!
Looking back at the graph, since the spread in the data is so large (about 4 degrees), it is difficult to be convinced that there are not varying anomalies at each of the other stations. The large anomaly at station One might well be accompanied by smaller anomalies at stations Two and Ten, and similarly the one at station Six by anomalies at Five and Seven. For example, notice in Figure 1 (front cover) that there are no measurements above the x-axis for the back bearing at station Two (plotted as 2B), suggesting a magnetic anomaly here.
In order to remove the effects of the anomalies, the back bearing data must be compared against measurements taken physically at the same location - and not against the theodolite that was located physically distant. This is partially possible by averaging the back bearing data for each station. With a sufficiently large data set, errors should cancel out, and the averaged bearing should be accurate. Furthermore, once residuals have been calculated for each station, the residuals can be compared across all stations, giving significantly more data in which to see systematic trends in compass or surveyor.
The forward bearing data has been compared against the theodolite. A residual has been calculated, adjusted by about 1 degree to correct for the supposed alignment error, as follows: Compass + 1.0 - Theodolite. The back bearing data has been compared against averages of the data at each station. A residual has been calculated using the formula: Compass - AveCompassAtStation. A positive residual therefore indicates that the compass reading was higher than the true value. The result of this approach is that all the residuals can now be compared together, irrespective of the station or direction in which the bearing was taken.
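The two residual formulas can be sketched directly. The 1.0-degree alignment correction is the value assumed above; the example readings are invented.

```python
def forward_residual(compass, theodolite, alignment_correction=1.0):
    # Forward bearings are compared against the theodolite reading,
    # shifted by the supposed 1-degree alignment error.
    return compass + alignment_correction - theodolite

def back_residual(compass, readings_at_station):
    # Back bearings are compared against the average of all compass
    # readings taken at the same station; the average shares any local
    # magnetic anomaly and so cancels it out.
    ave = sum(readings_at_station) / len(readings_at_station)
    return compass - ave

# Invented example readings:
f = forward_residual(120.0, 120.5)                      # 0.5
b = back_residual(272.0, [271.0, 271.5, 272.5, 271.0])  # 0.5
```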
The data has then been grouped by person (Table 5), and then by compass (Table 7), to determine whether any trends are apparent. Rather than graphs, the data is presented in tables. Note that the shading indicates that the data is from a forward bearing - some forward and back bearing data for the same station have been presented in the same column for compactness.
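The grouping step can be sketched as below. The (person, compass, residual) records are invented stand-ins for the real data sets, not the experiment's numbers.

```python
from collections import defaultdict
import math

# Invented records; residuals are in degrees.
readings = [
    ("Anthony", "2", -0.4), ("Anthony", "4", -0.6),
    ("Ben", "1", 1.1), ("Ben", "2", 0.4),
    ("Wookey", "1", -0.9), ("Wookey", "5", -0.7),
]

def grouped_stats(records, key_index):
    """Average and sample standard deviation of the residuals, grouped
    by person (key_index=0) or by compass (key_index=1)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key_index]].append(rec[2])
    stats = {}
    for key, vals in groups.items():
        mean = sum(vals) / len(vals)
        var = (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
               if len(vals) > 1 else 0.0)
        stats[key] = (mean, math.sqrt(var))
    return stats

by_person = grouped_stats(readings, 0)   # one row per surveyor
by_compass = grouped_stats(readings, 1)  # one row per instrument
```

An average well away from zero then flags a possible systematic error for that person or compass, and a large standard deviation flags inconsistency.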
Averages over all measurements for a given person or compass have been calculated. Any average different from zero suggests a systematic error. A large sample standard deviation - a measure of the spread of the data - may indicate poor technique. An average close to zero with a low sample standard deviation indicates good technique and a good compass.
In the following, I have made some observations based on the numbers presented. Whether these conclusions are real effects or not is impossible to tell given the small amount of data and the high degree of data processing carried out.
Notice that the standard deviations for the compasses are all moderately high, whereas for the people the standard deviations are quite varied. This makes sense if the deviation is attributed to observer error (both random and systematic): each compass was used by a variety of people, which inflates its spread, whereas any individual might read the compass quite consistently, resulting in a moderately small standard deviation for that person.
Compasses 2, 4, 5, and MCG all seem acceptably accurate, within 0.5 degrees of the all-compass average. Compass 1, however, seems poor with an offset of 1.0 degree. However, there are only two data sets for this compass, so this offset could be caused by people errors.
Anthony appears to have good technique and used good compasses. The three compasses he used have averages of (-0.5, -0.5, 0.5) giving an overall average of -0.5, very close to Anthony's overall average of -0.3. Whether it is legitimate to compare the averages in this way is hard to tell. If there was a large amount of data, then it would be legitimate, but with the small amount here, this might simply be a circular comparison.
Ben's reading for compass 1 was distinctly high compared to his other two readings. Compass 1 was also used by Wookey, who appears to read systematically low. Comparing Wookey's and Ben's data sets subjectively, it does seem consistent that compass 1 tends to read slightly high. The three compasses Ben used have averages of (1.0, -0.5, 0.5), giving an overall average of 1.0, again quite close to Ben's overall average of 0.7 (again, this may not be a legitimate comparison).
Brian's data looks awful, but the raw data is actually rather good. Brian's back bearing data almost exactly matches his theodolite forward bearings, as shown in Table 8. The average of this data is zero, indicating no systematic error at all, not even the magnetic anomalies measured by everyone else! One possible explanation that has been suggested is that Brian stood while taking the measurements, whereas others crouched close to the ground. If the magnetic anomalies were caused by iron in the ground, this might explain the absence of it in his readings. On the other hand, it may be no coincidence that it was Brian who made the original theodolite survey.
Julia's average is quite good, and the standard deviation is quite low. This was a surprise, partly because she is new to surveying, but mainly because on the graph her data appears to fluctuate up and down. Hers is also the biggest data set, and it is perhaps this together with a number of quite accurate readings that has improved the statistics. The four compasses she used give an overall average of -1.3 (2 x 0.1, 2 x -0.5, 2 x -0.5, 0.5). This is not so close to her overall average of -0.5, perhaps suggesting a tendency to read high.
Olly's, Rachel's and Will's readings all appear to be slightly high. Olly used compasses 2 and m (mcg), both of which gave slightly high averages, which would have contributed to his overall average. The two compasses he used have an average of 0.6, quite close to his overall average of 0.8. However, the high standard deviation might indicate varying technique.
Rachel's overall average of 0.5 is not consistent with her compass average of -0.5, and Will's overall average of 0.4 does not compare well with his compass average of -0.5. These figures perhaps suggest a systematic or random error in their technique.
Wookey's average seems slightly low. His compass average is 0.6 (1 + 0.1 - 0.5 - 0.5 + 0.5), which is not so close to his overall average of -0.8, suggesting a tendency to read low.
In summary, it does appear that systematic errors can be detected with this approach, but overall, the question is still open, as the recorded data has too much noise (magnetic anomalies), and there is not enough of it to make statistically reliable conclusions.
A final observation is that taking the data as representing a set of average surveyors and average conditions, the consistency in the results, for whatever reason, is quite poor. The overall standard deviation is 1.2 degrees, or in other words, a single reading is likely to be in error by as much as +/- 3 degrees! How do we ever manage to make a grade 5 survey?! The answer perhaps is that over a large number of survey legs, the errors really do cancel out.
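That intuition can be tested with a quick Monte Carlo sketch. Apart from the 1.2-degree spread, every parameter here (leg length, number of legs, trial count) is a made-up illustration: the positional misclosure of a traverse grows roughly with the square root of the number of legs, so the relative error shrinks as the survey gets longer.

```python
import math
import random

random.seed(42)

def misclosure(n_legs, sigma_deg=1.2, leg_length=10.0):
    """Distance between the true and surveyed end points of an n-leg
    traverse whose bearings carry independent Gaussian errors."""
    tx = ty = nx = ny = 0.0
    for _ in range(n_legs):
        true_b = random.uniform(0.0, 360.0)
        noisy_b = true_b + random.gauss(0.0, sigma_deg)
        tx += leg_length * math.sin(math.radians(true_b))
        ty += leg_length * math.cos(math.radians(true_b))
        nx += leg_length * math.sin(math.radians(noisy_b))
        ny += leg_length * math.cos(math.radians(noisy_b))
    return math.hypot(nx - tx, ny - ty)

def relative_error(n_legs, trials=200):
    """Mean misclosure over many trials, as a fraction of traverse length."""
    mean = sum(misclosure(n_legs) for _ in range(trials)) / trials
    return mean / (n_legs * 10.0)
```

With these numbers, a 100-leg traverse ends up with a relative error several times smaller than a 4-leg one, even though each individual bearing is just as noisy.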
The key is to ensure that all the data can be compared, preferably by making measurements from the same location and the same height. As the theodolite survey was made from the central station, it would be best to make all compass measurements from there too (ensuring that the surveyor re-seats the compass for each measurement to ensure randomness). Of course, one of the aspects that we are trying to measure is personal differences in technique, so when giving instructions on how to make the reading, care must be taken to allow the surveyor sufficient freedom for their technique to be apparent.
Secondly, a lot more data is needed. As with any experiment, any change in measurement should be attributable to only one variable. In our data sets, we were changing two variables - the person and the compass. Multiple readings for a given compass and person (at least three for each compass) are needed to see whether variations are really systematic or random. Thus given the time required to collect the data, it would be better to focus on a small number of compasses to get statistically more significant results.
 Geomag.exe, available from the web site of the Naval Oceanographic Office, DOD Geomagnetic Data Library, ATTN: Code N3422, 1002 Balch Boulevard, Stennis Space Center, Mississippi 39522-5001, USA
Anthony Day, Julia Bradshaw, Olly Betts,
Rachel Gilkison, Will ?, Wookey, Brian Clipstone,
Ben Cooper (author)
Andrew Atkinson <email@example.com>
This proposal was first published in CP a year ago. We have had some feedback since then, and some details and costings have been sorted out, as well as proving the appropriateness and cost-effectiveness of the microfilm method. So it's time to publish the current state of the proposal. We have volunteers for the SW, Wales/Forest of Dean, and International/Expedition areas, but need a few more (as well as the money) to actually get this off the ground. If anyone is interested in being a co-ordinator for their region, Andrew would like to hear from you.
At one of the early meetings of the CSG concern was expressed about the amount of surveying data that had been lost in the past, and the amount of work that was being undertaken to reproduce these surveys. The question was how to reduce the loss of this survey data in the future.
Most survey data is kept by the original surveyors. This is usually the only record of the original data, and the usual way for it to get lost is for the person concerned to retire from caving and forget about the data in their loft; but there is also the potential for loss through fire etc. (e.g. UBSS hold a vast quantity of data in their library, which burned down in 1981 - on that occasion luck was on their side and no data was lost, but this may not always be the case).
With this in mind, the best way to protect data is to set up a second (and ideally a third) storage place. This, however, immediately comes into conflict with clubs protecting the data they have collected, usually in order to sell it to recover the costs of the surveying and to fund further surveying. The other problems are where and how to store the data, and the cost of the storage and of copying the data, which, although small per unit, adds up to a very large amount for all data across the country.
To overcome the problem of clubs and individuals not wanting to divulge their survey data to an outside source, it is proposed that the group (i.e. CSG, BCRA or another body set up for the purpose) that ends up holding the data holds it under prescribed conditions, listed below as Classes 1-5. We need to define some terms here: The provider is the person, group or club that hands over the data/survey and specifies the Class that it is to be held under. They must be entitled to do this, and thus will normally be the original surveyors/surveying club. The author is the original surveyor(s)/drawer(s)/club, which may be different from the provider. The holding body is the organisation charged with maintaining the archive. The user is anyone wishing to access the archive.
1. Public domain - The data is stored and is free to any user for any purpose. The original author should be credited.
2. Free Access - The data is stored and is free for any user for any purpose, as long as the original author is credited. Profit may not be made but the costs of distribution may be recovered.
3. Limited Access - The data is available to any user, but reproduction and use may only be carried out with the permission of the provider or holding body. Where to gain permission will accompany the data. (i.e. the original author may pass permission to the holding body or provider.)
4. No Access - The data may not be accessed by anyone; however, the fact that it exists will be listed publicly. Any further enquiries will be referred to the provider.
5. Secret - The data will be stored, but no record will be publicly available. Anyone asking about data for the cave, or for entrances at the same location, will be told "nothing known for that site". The authors will be informed of the request unless those asking request secrecy (i.e. secrecy can be reciprocal).
Note that different types of data can be kept under different Classes. Typically the survey data itself might be Class 4, whilst the completed survey is Class 2 or 3. There will also be the facility for providers to record what information they hold but do not or cannot (i.e. due to expense) give to the holding body. This should reduce duplication of work. Anyone wishing to send locations under Class 5 may also do so with the same conditions.
After an agreed period (e.g. 5 or 10 years) with no contact from the provider, the holding body will try the last known contact address to warn them of the lapse. This information will also be published (with the exception of Class 5) in Compass Points or an equivalent journal. If no response is heard within two years, the data will move to the default Class (1, 2 or 3), with the holding body becoming responsible for giving permission for Class 3 data. The default Class will be stipulated when the data is first transferred.
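The lapse rules amount to a small piece of logic, sketched below. The 10-year review period, the 2-year grace period and all the field names are placeholders, since the proposal leaves the exact period open.

```python
from datetime import date

def current_class(held_class, default_class, last_contact, warned,
                  today, review_years=10, grace_years=2):
    """Class the data should be held under today. `warned` records that
    the lapse warning was sent and published with no response."""
    years_silent = (today - last_contact).days / 365.25
    if warned and years_silent >= review_years + grace_years:
        # Provider never responded: fall back to the default Class.
        return default_class
    return held_class

# A provider last heard from in 1985, warned and silent ever since,
# would by 1999 have lapsed to the default Class:
c = current_class(4, 2, date(1985, 1, 1), True, date(1999, 1, 1))
print(c)  # 2
```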
The Class of the data and the default Class can be changed by the provider at any time.
Data may be withdrawn from the Holding Body as long as all the costs, charged by a third party, involved in copying the data and separating it from any other data are paid by the provider. The Holding Body will not charge any administration fee. (This is mainly to stop anyone giving data to the Holding Body just to get a free backup.)
Ideally this would be every form of data that goes with the cave. In reality it is proposed that the following items be included.
A - Where the data is held and by whom.
C - Original data collected:
      Part i   - Figures
      Part ii  - Cross sections
      Part iii - Drawings
D - Corrected centre line (intended for authors who want to keep their original data private).
E - Drawn-up survey.
Data in Categories A, B, Ci and D will be held in computer format.
All data in Category C will be microfilmed.
There will be five regional organisers (at the start there may have to be fewer than one co-ordinator per region), with a possible sixth for international data collected by UK clubs or individuals. There will be two national co-ordinators. At this time it is envisaged that one will do the microfilming and the other will be a backup storage post.
The regional organisers will be in charge of collecting and cataloguing the information given to them. This will require each document to be given a unique number (this should follow the National Cave Register format and UIS guidelines, plus a Provider code).
This number should preferably be attached to the document before it is microfilmed. All the information will be logged in the database with the Class and default Class, and a printed copy sent to the provider for checking. The documents will then be sent to the designated national co-ordinator for microfilming, and the electronic data forwarded. Data held under Classes 4 and 5 may be sent directly to the national co-ordinator, who will do the cataloguing. It may also be possible for providers to do their own cataloguing and pass it on electronically to be incorporated into the local and national databases. The national co-ordinators will both hold a copy of the microfilm and all electronic data. Eventually the aim is for the local organiser to hold a copy of all the data in Classes 1, 2 and 3; however, as film readers are expensive, only the electronic data can at present be considered.
To ensure that this scheme achieves its aim, the copying and storage of British data should be at no cost to the provider. (However, donations will be gratefully received.) Grants to cover these running costs will have to be found. Although ideally this should also apply to foreign data, at present this will not be practicable, so costs will have to be charged. As mentioned earlier, costs will have to be paid to withdraw data (after a time this charge could perhaps be reduced or dropped altogether).
The publicly available part of the database will be listed on the web. Any further enquiries to the national administrator must enclose a stamped addressed envelope, and a donation would be appreciated.
The aim is to have as much data available on the web as possible (assuming permission has been given). This will be free to access. Any data requested in a different format may incur costs. This depends on the information requested and the format it is required in.
Each provider donating data would be a member for as long as the data remains in the possession of the Holding Body. A committee elected every three years will control the day-to-day running. Any proposal to change the protection of the data would be subject to a 75% majority, and a period of at least six months after the vote during which data may be withdrawn. Any change of the people holding the data in Classes 4 and 5 must be published six months in advance for the same reason.
Graham Mullan (UBSS) is at present undertaking to microfilm 50 years' worth of exercise books containing data from Ireland, estimated at 20,000 frames and costing about £150. The filmers will do short runs to add to the old film; however, I understand that the film cannot be read until it is full and developed. Alternatively it can be cut, developed and fiched, but this does not make as good an archive source.
Database program:                     free (using PostgreSQL or similar)
Extra storage space on PCs:           £100 for 10Gb drive
Fire safes (perhaps over the top?):   2 x £100-£600
Total:                                minimum £500; £1000 with safes

Copying of data:                      approx. £50 per 5,000 frames
Web space:                            initially free (Chaos.org), but could be £25/year for 20Mb with suitable database support
Incidental costs (post, phone):       £20/year?
Total:                                under £100/year, unless there is a lot of activity
National Lottery:           very slim chance with the Heritage Grant.
NERC:                       need to be part of an educational institution.
BCRA:                       send this document to them.
BCRA Science awards:        to be done.
BGS:                        too commercial for the rules we require, but may still be worth asking.
OS:                         too commercial for the rules we require, but may still be worth asking.
Limestone Research Group:   still looking into this.
This is the initial proposal to reduce the loss of data. Positive criticism and suggestions are welcomed.
31 Priory Avenue,