Published on 20 December 2002
The Spaceguard survey: midway in time...
The NASA Spaceguard survey is a 10-year program, started in 1998, whose aim is to discover 90% of the near-Earth objects larger than 1 km. At the end of 2002 we were exactly at the midway point, so it is interesting to examine the discovery statistics.
The data: in the first week of January 2003, I downloaded three files from the Minor Planet Center web site, namely the Amor, Apollo and Aten files, kept only the useful data (i.e. removed the headers and footers of the page, keeping just the numerical data), removed the few objects already discovered in 2003, then merged them into one large text file, which I later used in Excel and in Kaleidagraph (another data-processing program available for both PCs and Macintoshes). Since then, I have worked on and off on this project.
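For readers who want to redo this kind of preparation, here is a minimal Python sketch of the merging step (the actual work described above was done by hand and in Excel/Kaleidagraph; the filenames below are hypothetical, and the files are assumed to have already been stripped of their headers and footers):

```python
# Minimal sketch of the merging step. Filenames are hypothetical: the three
# files are assumed to be plain-text copies of the MPC Amor, Apollo and Aten
# pages with headers and footers already removed.

input_files = ["amor.txt", "apollo.txt", "aten.txt"]

with open("neo_merged.txt", "w") as out:
    for name in input_files:
        with open(name) as f:
            for line in f:
                if line.strip():          # skip blank lines
                    out.write(line)
```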
At that point there were 2160 individual objects: 274 numbered near-Earth asteroids (i.e. with good-quality orbits), 854 asteroids which had been observed at two or more oppositions (i.e. lower-quality but meaningful orbital elements), and the remaining 1032 objects which had been seen during only one opposition, i.e. with low to very low quality orbital elements. These 2160 objects comprise 986 Amor, 1006 Apollo and 168 Aten asteroids.
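As a reminder of how these three groups are defined, here is a small sketch of the classification from the semi-major axis a and the eccentricity e, using the standard dynamical boundaries (the example orbits below are illustrative, not taken from the MPC files):

```python
# Sketch: classify a NEO as Amor, Apollo or Aten from its semi-major axis a (AU)
# and eccentricity e, using the standard dynamical definitions.

def neo_class(a, e):
    q = a * (1.0 - e)          # perihelion distance (AU)
    Q = a * (1.0 + e)          # aphelion distance (AU)
    if a < 1.0 and Q > 0.983:
        return "Aten"
    if a >= 1.0 and q < 1.017:
        return "Apollo"
    if a >= 1.0 and 1.017 <= q < 1.3:
        return "Amor"
    return "not a NEO"

print(neo_class(1.458, 0.223))   # Eros-like orbit -> Amor
print(neo_class(1.078, 0.827))   # Icarus-like orbit -> Apollo
print(neo_class(0.967, 0.183))   # Aten-like orbit -> Aten
```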
The following graphs give the same information:
Here is the distribution for objects observed at two or more oppositions. These include both the numbered objects (i.e. with good-quality orbital data) and objects whose orbits, while not good enough for numbering, are relatively good and should be easy to recover at their next opposition.
Here is the distribution for one-opposition objects: the distribution of the observed arc, i.e. the number of days between the first and the last night of observation.
Typically, an object with an arc shorter than one month will be rediscovered later and identified with the earlier object only when the second-opposition arc becomes long enough. Such a short arc is usually almost worthless for attempting a recovery a few years later. However, a ten-day arc is in the vast majority of cases enough to make sure that the object will not collide with the Earth in the coming century. Conversely, the orbit of such an object is so uncertain that if it makes a close approach to the Earth in the future, its large error ellipse can include the Earth, giving a temporary low probability of impact which usually vanishes as soon as more observations are collected.
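The arc histogram can be reproduced with a few lines of code; here is a sketch (the date pairs below are illustrative only, in practice they would be read from the MPC observation data):

```python
# Sketch: bin the observed arc (days between first and last observation)
# for one-opposition objects. The date pairs are illustrative only.

from datetime import date

arcs = [
    (date(2002, 9, 3), date(2002, 9, 12)),
    (date(2002, 11, 1), date(2003, 1, 5)),
]

bins = {"<10 days": 0, "10-30 days": 0, "1-3 months": 0, ">3 months": 0}
for first, last in arcs:
    days = (last - first).days
    if days < 10:
        bins["<10 days"] += 1
    elif days < 30:
        bins["10-30 days"] += 1
    elif days < 90:
        bins["1-3 months"] += 1
    else:
        bins[">3 months"] += 1

print(bins)
```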
As far as discoveries are concerned, most of them were made by the LINEAR program, which started observing in 1998. Second comes the pioneer of all these programs, Spacewatch, followed by JPL's NEAT, then Lowell Observatory's LONEOS program and the apparently halted Catalina Sky Survey. 309 discoveries were credited to all other discoverers, among them a few individuals who made large contributions, such as Carolyn and Gene Shoemaker and their many helpers, Eleanor Helin and her collaborators in the Palomar days, and Robert McNaught et al. at Siding Spring Observatory, at the time when Australia had a southern-hemisphere discovery program.
We can show here the magnitude histogram of the four main search programs. It shows that LINEAR and NEAT magnitudes are more or less the same, fainter for Spacewatch, and brighter for LONEOS.
The two diagrams above tell us that LONEOS discoveries peak at absolute magnitude 18, LINEAR at magnitudes 18 and 19, and NEAT and Spacewatch both at magnitude 20, even though Spacewatch is clearly much better at fainter objects. The apparent limiting magnitudes of detection are roughly 18 for LONEOS, 19.5 for LINEAR, 20.5 for NEAT and 21.5 for Spacewatch. For comparison, I also included all the objects discovered under observatory code 675, i.e. mostly photographic discoveries made with the Palomar 45 cm Schmidt telescope.
Another interesting discovery statistic is the distribution of discoveries versus the month of the year:
There is almost a factor of 3 between the number of discoveries made in July and in September. The drop in discoveries in July and August is caused by the generally poor weather in New Mexico and Arizona during these months, while the surge in September can be explained by the fact that a substantial fraction of the objects "missed" during the summer are caught later. It would be worth checking this by looking at the individual objects discovered in September (to see whether they were already visible, i.e. "discoverable", during July and August), but it is a likely explanation.
Orbital elements:
Here are a few graphs with some remarks:
I don't know of any reason why this histogram is triple-peaked, and I would be interested to hear of a physical explanation. One possibility might be the resonances in the main belt where the NEOs originated.
LONEOS's 1999 XS35, which has a semi-major axis of 18.001 AU, as well as ten other objects with 3.3 < a < 4.3, have been omitted for clarity of the diagram.
While the inclination distribution of the currently discovered NEOs more or less matches that of the main-belt asteroids, the eccentricity distribution is clearly different, revealing the different types of orbits of NEOs. It is interesting to see the non-zero number of objects with eccentricities close to 0. Most NEOs are believed to be main-belt asteroids whose eccentricity has been "pumped up" by a resonance with Jupiter; to end up with such a small eccentricity, these objects may therefore have suffered further close encounters with inner planets like Mars and the Earth. Another group of objects is believed to be lunar ejecta from old collisions with our Moon.
The cut at a perihelion distance of 1.3 AU comes from the definition of Amor objects (q < 1.3): objects with larger q are not catalogued as NEOs. There does not seem to be any physical reason why the curve should decrease for objects with q larger than 1; if objects are migrating from the main belt to the near-Earth region, one would expect (at least I do) the number of objects to keep rising toward the main belt. The decrease beyond a perihelion distance of 1 AU is therefore probably a selection effect: objects which almost never come close to the Earth are not detected efficiently.
The distribution of aphelion distances is very interesting since it looks like a truncated bell-shaped curve (omitting a few spikes): objects whose aphelion Q is smaller than 1 AU are not discovered, since they are only visible either in the early evening or morning sky or on the day side. It is likely that the curve is really symmetric, and that many more such objects exist but are not discovered because they never appear in the opposition zone (i.e. they are always in the daylight sky or low on the horizon in the evening or morning sky).
Some of the data can be correlated:
Here is inclination versus semi-major axis. No clear grouping appears, unlike in the same diagram for main-belt objects.
Same as above. The curved edge of the distribution is of course caused by the fact that an object with a semi-major axis of 2 AU and an eccentricity lower than roughly 0.35 is not a NEO.
Evolution of discoveries with time.
This evolution has mainly been driven by the number of active search telescopes. It is apparently also affected by the meteorological conditions (good years, bad years...). It is very difficult to build a model of the efficiency of the search telescopes: the same telescope, used in a different manner (exposing longer to go deeper, for example), would not discover the same number of objects, nor the same brightness class of objects.
The following animation shows this evolution by magnitude class.
Discoveries of NEOs per magnitude range since 1970. The randomness of the early discoveries reflects the fact that nobody was looking for these objects. Then came the Shoemaker and Helin years at Palomar, with discoveries becoming more frequent. After 1989, while the Palomar programs were still going on, Spacewatch started to discover a few faint objects; then LINEAR came into action (as well as NEAT, we are still in the Helin era after all) and the discovery rate exploded. It will be interesting to see the same diagram in a few more years.
A few characteristics are easily visible on these diagrams:
- Since 1998, the total number of discovered objects has been multiplied by 5. NASA's Spaceguard works!
- All objects brighter than magnitude 12 had been discovered long before the start of the Spaceguard survey, but since the beginning of the survey, 2 more magnitude-13 objects have been discovered. Such objects, if metallic and arriving at high speed on the Earth, could cause extinctions.
- Faint objects, magnitude 23 and fainter, were not discovered before the 1990s.
Here is, in detail, the evolution of discoveries per magnitude class since the beginning of the Spaceguard survey:
There are only 3 NEOs brighter than absolute magnitude 13 (Ganymed, Eric and Eros).
Asteroids 20826 (2000 UV13) and 25916 (2001 CP44) are the two magnitude-13 objects discovered since the beginning of the survey. No other objects as bright have been found since.
The number of magnitude-14 objects (i.e. between 14.0 and 14.99) found every year is decreasing, down to about one a year. The number of magnitude-15 objects is also decreasing, though 5 were still found in 2002. With such small numbers of objects, large year-to-year variations are to be expected.
It will be interesting to see how the magnitude-15 discovery rate behaves. It is likely that the current trend will continue and that in a few years it will go down to one or zero per year (one has been found so far in 2003). The same should then happen for magnitude-16 and 17 objects, and the Spaceguard survey goal will have been reached. (On this page, magnitude-17 objects are those from 17.00 to 17.99, i.e. objects brighter than magnitude 18.)
The number of magnitude-16 objects found each year seems to be slowly increasing. We are still far from having discovered all the objects in this magnitude class.
The number of objects in this magnitude range still tends to increase every year (apart from a peak in discoveries of magnitude-16, 17 and 18 objects in 2000), but much less so for the magnitude-17 and 18 objects, and the number of magnitude-19 objects even decreased in 2002. The magnitude-17 and 18 curves are almost parallel (which would tend to show that 2000 was a "good" year compared to 2001?). It would be interesting to correlate this with the total number of square degrees searched by the surveys, even though they are far from all reaching the same limiting magnitude. I know that since 1998 LINEAR has started to use a second one-meter telescope, and that NEAT has started observing with the Palomar Schmidt telescope, but I don't have those starting dates. It is likely that we are now heading toward fewer and fewer magnitude-17 and 18 discoveries, while the discovery rate for fainter objects will keep increasing as the survey efficiencies improve. We are, however, still discovering around 60 to 80 such objects per year, and it would not be too surprising if it took a few more years to reach a peak, followed by a decreasing number of discoveries each year. After that peak, it will still take a few years before we reach zero per year, as is now the case for magnitude-13 objects, for example.
Same tendencies as above, apart from the overall smaller number of objects. It is likely, even though I did not look at individual statistics, that these objects are found only when they are near the Earth.
The small number of objects causes some random fluctuations from year to year.
Out of 45 objects of magnitude 26 and fainter, 30 were discovered by Spacewatch, which is the only program currently reaching apparent magnitude 21 and deeper.
Where to go from here, and how?
Asteroids are detected when they come close enough to the Earth to become brighter than the survey limit. If a class of objects is bright enough to be detected at perihelion and at opposition, then essentially the whole population should be detected in little more than a year of observations. One could consider that this was the case for magnitude-13 objects at the beginning of the Spaceguard survey: in two years, all the "remaining" objects had been detected.
It is clear that we are finding fewer and fewer of the brighter objects (magnitudes 14, 15, 16). It is also clear that the increase in discoveries near magnitude 18 is smaller than for objects of magnitude 22 (even though the total number of objects discovered per year is higher, since they are generally brighter and easier to discover).
It is also clear that making a prediction for the fainter objects is very difficult, owing to the small number of objects discovered, the year-to-year variations, the varying number of telescopes used for surveying, and their survey efficiency and observing mode. Since these objects are faint enough that they are only detected near the Earth, and are therefore moving very fast, detection also depends on the limits of the detection software as well as on the diameter of the telescope.
It is likely that no bigger telescope will be put into regular service before the end of 2007, at least not in such a way that it deeply affects the current search effort. If one is, it will not do so for the brighter objects (brighter than magnitude 20), which by then will have become rarer.
It is likely that the recent upgrade of Spacewatch 1 with a larger CCD array will increase the number of discoveries of fainter objects, as should the upgrade of NEAT to a thinned CCD camera covering nearly 10 square degrees. It is possible that a southern-hemisphere search telescope could globally increase the current search efficiency.
The best way of getting an idea of the population of objects in each magnitude range is to examine the statistics of rediscovery of individual objects. I did not want to reload this data from the MPC: it is something which needs to be compiled on a regular basis for each discovered NEO, and it is more work than I care to do. In any case, having an estimate of the population does not tell you how the discovery rate is going to evolve; that depends on the survey efficiency, i.e. the number of telescopes, their diameter, the number of square degrees covered and to what limiting magnitude, etc. As a way of seeing what could happen, I made a simple linear extrapolation, based on the following considerations:
- Whatever error we make on the brightest objects, it will not affect the final result much, because of the small number of objects, and the even smaller number of objects left to discover.
- It is likely that sooner or later the number of discoveries per year of intermediate-class objects (magnitude 16 to 19) will reach a peak, then decrease.
- The error on the number of fainter objects depends mainly on the depth of the survey. Whatever the change in survey efficiency and discovery rate for these objects, by the end of the Spaceguard survey we will have barely started to discover a small part of this population. The number of such discovered objects is so far limited only by the depth of the search programs (or, said otherwise, by the volume of space searched).
Since one of the interesting questions is whether the Spaceguard goal will be reached by 2008, only the brightest objects (which do not matter too much anyway) and the intermediate-size objects are important. So it is not unreasonable to make a linear extrapolation at this point to see what the situation could be in 5 more years. I therefore assumed that in the coming 5 years we will discover as many objects as we did in the last 5 years.
In red, the current distribution of objects per magnitude class. In blue, the objects discovered between 1998 and 2002. In grey, what it could look like if we were to discover the same number of objects between 2003 and 2007 as between 1998 and 2002. This extrapolation is certainly not too far from the truth in the bright-object domain (i.e. magnitudes brighter than 17), and certainly completely wrong for fainter objects (20 and fainter), which could be much more affected by a globally more efficient search network.
The cumulative number of objects, as of now and as extrapolated to the end of 2007. For objects brighter than magnitude 18 (read here at magnitude 17), we would go from a total of 599 objects to 936. So if the number of objects brighter than absolute magnitude 18 is less than 1040, the Spaceguard goal as defined by NASA in 1998 will be reached. All this is very speculative, of course: we have neither a good idea of the population of objects and of how it varies with size, nor a good idea of how the survey efficiency will evolve.
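The arithmetic behind these two numbers is simple; here is a short sketch (the counts are those quoted above, and the 90% threshold is the Spaceguard goal itself):

```python
# Sketch of the extrapolation for objects brighter than H = 18, using the
# cumulative counts quoted above: 2003-2007 is assumed to repeat 1998-2002.

total_end_2002 = 599                          # known at the end of 2002
extrapolated_end_2007 = 936                   # value quoted above
discovered_1998_2002 = extrapolated_end_2007 - total_end_2002   # 337 objects

# The goal is 90% of the real population, so the extrapolated total meets it
# only if the population is at most 936 / 0.9, i.e. about 1040 objects.
max_population_for_goal = extrapolated_end_2007 / 0.9
print(discovered_1998_2002, round(max_population_for_goal))      # 337 1040
```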
My personal opinion is that since the population of objects of magnitude 18 and brighter is certainly larger than 1000, and since at the midway point in time we have not yet reached the point where half of the population has been discovered, it is likely that the Spaceguard goal will not be reached in time, unless larger telescopes with wider fields of view are put into operation. The current survey is limited in depth, which means that apart from the brighter objects, which are seen even when they are in the main belt, the others have to come closer (which happens rarely) to be detected. If a given class of objects can only be discovered in the part of the orbit where each object spends half of its time, half of them will be discovered the first year, then 75%, then 87.5%, and so on. Add to that the fact that there is no southern coverage (again delaying discoveries until the objects make an apparition in the northern hemisphere), and it may explain why it will take more time before we reach quasi-completeness.
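Put as a small sketch, under the simplifying assumption that each remaining object has an even chance per year of being observable:

```python
# Sketch of the completeness argument: if each remaining object has a 50%
# chance per year of showing up in the observable part of its orbit, the
# discovered fraction after n years is 1 - 0.5**n.

for n in range(1, 6):
    print(n, 1 - 0.5 ** n)    # 0.5, 0.75, 0.875, 0.9375, 0.96875
```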
However, this is not too important: with the current means a lot has already been done, and it is likely that it was not possible to do better given the available funding and the lack of interest of many so-called developed countries in this type of endeavour. This is very sad, for it is clear that NEO searches have been a fantastic motor for all asteroid-based science, and that using part of the available science budget for this type of study is a very good investment in future research.
Putting these discoveries in a wider context.
Using the following diagram, obtained from Alan Harris (formerly of JPL), we can see the various population estimates as well as the impact rates.
Some preliminary remarks:
- As far as the distributions are concerned, those which do not match the little we know of the asteroid distribution, i.e. around the very brightest objects where the inventory seems complete, are very likely wrong. I therefore discarded the Werner curves and the constant power law. As a personal feeling, I would consider the Stuart 2001 distribution to be closer to reality. I smoothed it by eye (decreasing the point at magnitude 21 a little) and extrapolated beyond magnitude 22.5 with a line reaching both the Werner et al. Pv=0.25 curve and the constant power law toward the extreme upper left of the diagram, which is also where the Rabinowitz 2000 points seem to aim. These populations are models anyway, and future observations will allow us to discriminate between them. In fact it is almost a paradox of the Spaceguard survey to aim at discovering 90% of a population which was initially uncertain by a factor of 2, and is still uncertain by at least 40% at the midway point. While it can be shown (as will be done later) that there is only a very small probability that an impacting asteroid is out there, it is interesting to realize that one of the outcomes of this survey will be a much better knowledge of the population.
- There is a direct relation between the number of objects in the near-Earth environment and the impact probability. However, this relation is only a statistical one, valid over a long period of time. When we talk about a class of objects which impacts the Earth on average every one million years, we should expect about 100 such impacts per 100 million years; on the scale of a human lifetime, the number is essentially meaningless. Either there is such an object on a short-term collision course with the Earth, or there is not. It means that very likely there will be no impact, and that if there is one, we will have been _very_ unlucky. But this does not change the outcome: when you are the victim of a catastrophe, the fact that you were lucky or unlucky does not make you less of a victim. The correct attitude, which was understood by NASA but apparently not by any other institution in the world, is to spend a very small amount of money (compared to how much the total or partial destruction of civilisation would cost, or even, to take a current example, how much it costs to destroy a single country) in order to make sure that there is no object on a short-term collision course. The result of this effort can be seen in all these discoveries: we are very likely near the middle of the inventory of objects brighter than magnitude 18, and in a few years we should be able to be almost sure that there is no periodic object on a collision course with us, just as we are now almost sure that there is no very large object (able to cause the extinction of the human species) on a short-term collision course with us (short term meaning here a few centuries). Using a proportionality relation between impact frequency and number of casualties is another extreme kind of calculation. My personal feeling is that one should do what is possible to eliminate, magnitude class after magnitude class, the possible danger caused by asteroids. But saying that because large objects cause more "damage per year" they are more important to discover goes against the simple logic that the probability of an impact by a large object is so low that the next impact should normally be by a small object. All classes taken together, the impact hazard is anyway a very low probability hazard. Its only interesting feature is that impacts, contrary to most other hazards, can be predicted and, very likely, given enough time, avoided. Seen in probabilistic terms, a survey aimed at discovering 90% of the 1 km objects is a way to move the probability of no impact by a large object in the coming century from 99.991% to 99.999%, while leaving untouched the population of objects which have a probability of no impact of only 97%.
- There is a relation between the size of the impactor and the damage done (the log of the impact energy, as above), but this relation is very loose. In fact we measure, and rather poorly at that, an apparent magnitude, which we convert into an absolute magnitude (defined as the apparent magnitude the same object would have if placed at 1 AU from the Sun and 1 AU from the Earth), which is then converted into a standard diameter (the abscissa of the diagram above), for which we use a standard impact velocity and a standard density to derive an impact energy. The idea behind the 1992 Spaceguard report was to discover all objects of 1 km diameter and above, because an impact with such an object is believed to have global consequences, i.e. to affect not only the continent of impact but, through large-scale meteorological effects (the so-called impact winter), the whole Earth. There seems to be a threshold above which the number of fatalities rises sharply because of these global consequences of the impact.
The measured magnitude depends on the type of CCD used, on the colour filter (if any) placed in front of the CCD, and on the way the measurement is performed. Then, each object is variable: most objects smaller than 100 km (and all NEOs are) are non-spherical and therefore show magnitude variations with time, reflecting more or less light depending on the apparent cross-section of the object at the time of the observation. This is used to derive the rotation period and, to first approximation, an estimate of the non-sphericity of the object, but it requires dedicated observations over at least several hours. When an object has been observed for several nights, and then over several oppositions, one can derive a correct absolute magnitude for it. As can be seen from the first diagram of this page, most objects catalogued so far have been observed over a very short time span, and only a small proportion of them have correct photometry. Most of these objects are observed only to derive their astrometric properties (i.e. orbital elements), with a few measurements per night, and one cannot know whether the object was seen at its maximum or its minimum brightness. A quick look at any circular will show the rather large magnitude differences caused by the different detection systems and software, as well as some scatter which can be caused by the intrinsic light variations of the object.
Even once we have an absolute magnitude, unless we carry out further studies we cannot derive the taxonomic class of the object, nor its density. It is now believed that many objects are of the "rubble pile" type, and therefore have a much lower density than solid rock.
For a given object, we can very easily calculate its speed near the orbit of the Earth. These speeds vary basically between 10 and 30 km/s, which means a factor of 9 in kinetic energy. Then there are the real conditions of the impact: on land or at sea, in the morning or in the evening (adding or subtracting the Earth's rotation velocity), in the summer or winter hemisphere (with various amounts of combustible matter), etc.
Precise photometry (with a light curve of the object) is performed on only a small percentage of the objects, usually those which have a good orbit. Let us assume a half-magnitude error one way or the other. Albedo is not randomly distributed; there are various classes of objects: C-type (carbonaceous) objects usually have a very low albedo (of the order of 0.03), while M and S types have albedos ranging from 0.1 to 0.22. If we take this magnitude error and apply it to three types of objects (albedos 0.03, 0.1 and 0.22), we obtain quite different diameters. Densities seem to range anywhere between 0.5 for comet-like objects and 8 for metallic objects, and impact speeds vary from 8 km/s to above 30 km/s (even 60 to 80 km/s for objects on cometary orbits).
A practical example:
We discover an object rated at absolute magnitude 18. After careful photometry (likely not obtained until several years later, given the current lack of effort in this domain), it could turn out to be a magnitude 17.5 or 18.5 object.
If it is a dark object (and spherical), its diameter could be between 1.5 and 2.4 kilometers. If it has a 0.1 albedo, its diameter lies between 840 meters and 1.3 km. If the albedo is 0.22, the diameter is between 560 meters and 896 meters. In this short example, we can see that for a given object, measured in real conditions, the diameter could be anywhere between slightly above 500 meters and 2.4 km. A cometary object of 2.4 km diameter and a density of 0.5 would weigh 3600 million tons; a 1.3 km metallic object would weigh around 9200 million tons; a 500 meter stony object would weigh 196 million tons. If the stony object impacted the Earth at a speed of 10 km/s, it would deliver an energy of 9.8 × 10^18 joules, while the metallic 1.3 km object, impacting at 30 km/s, would deliver 4 × 10^21 joules: a factor of about 400 in delivered energy.
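These numbers can be reproduced with the usual relation between absolute magnitude, albedo and diameter, D (km) = 1329 / sqrt(albedo) × 10^(-H/5), plus a spherical-body mass and a kinetic energy; here is a sketch (the densities and speeds are the ones quoted above):

```python
# Sketch reproducing the practical example above. Uses the standard relation
# D (km) = 1329 / sqrt(albedo) * 10**(-H/5) between absolute magnitude H,
# geometric albedo and diameter, then a spherical mass and a kinetic energy.

from math import pi, sqrt

def diameter_km(H, albedo):
    return 1329.0 / sqrt(albedo) * 10 ** (-H / 5.0)

for albedo in (0.03, 0.1, 0.22):
    d_faint = diameter_km(18.5, albedo)   # half a magnitude fainter than H = 18
    d_bright = diameter_km(17.5, albedo)  # half a magnitude brighter
    print(f"albedo {albedo}: {d_faint:.2f} to {d_bright:.2f} km")

def impact_energy_joules(d_km, density_kg_m3, speed_km_s):
    radius_m = d_km * 1000.0 / 2.0
    mass_kg = density_kg_m3 * 4.0 / 3.0 * pi * radius_m ** 3
    return 0.5 * mass_kg * (speed_km_s * 1000.0) ** 2

print(impact_energy_joules(0.5, 3000, 10))   # stony 500 m at 10 km/s, ~9.8e18 J
print(impact_energy_joules(1.3, 8000, 30))   # metallic 1.3 km at 30 km/s, ~4e21 J
```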
While the impact speed of a given object is easy to calculate, when conducting a survey to detect objects able to deliver energy above a certain threshold, the absolute magnitude is clearly not a good parameter on its own. While reaching 90% completeness at magnitude 18, for example, we reach only roughly 50% completeness at magnitude 19. If 20% of the magnitude-19 objects are metallic, there should be about 300 such objects; discovering half of them means still missing 150 of them, or roughly 20% of the number of magnitude-18 objects.
However, there is a need to classify things in order to get an idea of the risk. Therefore, in the above diagram, we use a loosely determined apparent magnitude, convert it into a (not so precise) absolute magnitude, and consider the object to be a "standard" object which would collide with the Earth at a "standard" velocity, giving it an "expected" kinetic energy upon impact. But we have to keep in mind that this diagram gives only very general information, and that a given object will have to be examined in detail before being classified precisely (and even more so before attempting any deflection strategy, should the case occur). One important point not visible in the diagram is that the detections in a given absolute magnitude class correlate pretty well with the "money spent" on the survey telescopes.
Anyway, taking all these remarks into account, the same diagram can be transformed into the following table:
1. Absolute magnitude | 2. Standard estimated diameter | 3. Estimated number from diagram (cumulative) | 4. Estimated number (by magnitude) | 5. Known number in 2002 (cumulative) | 5 bis. Known number in 2005 (cumulative) | 6. Known number in 2002 (by magnitude) | 7. Probable number discovered at the end of Spaceguard (cumulative) | 8. Probable number discovered at the end of Spaceguard (by magnitude) | 9. Percentage of the population at the end of Spaceguard (cumulative) | 10. Percentage of the population at the end of Spaceguard (by magnitude) |
---|---|---|---|---|---|---|---|---|---|---|
<13 | 10 km | 6 | 6 | 3 | 3 | 3 | 3 | 3 | 100% | 100% |
<14 | 6.3 km | 16 (1) | 10 | 16 | 16 | 13 | 19 | 16 | 100% | 100% |
<15 | 4 km | 60 | 44 | 50 | 52 | 37 | 61 | 42 | 100% | 95% |
<16 | 2.5 km | 198 | 138 | 135 | 147 | 85 | 177 | 116 | 89% | 84% |
<17 | 1.6 km | 519 | 321 | 323 | 375 | 188 | 474 | 297 | 91% (2) | 92% |
<18 | 1 km | 1,280 | 761 | 599 | 760 | 276 | 936 | 462 | 73% | 61% |
<19 | 600 m | 2,800 | 1,520 | 959 | 1337 | 360 | 1,560 | 624 | 55% | 41% |
<20 | 400 m | 5,400 | 2,600 | 1,308 | 1935 | 349 | 2,165 | 605 | 40% | 23% |
<21 | 250 m | 12,500 | 7,100 | 1,600 | 2460 | 292 | 2,684 | 519 | 21.5% | 7.3% |
<22 | 150 m | 39,400 | 26,900 | 1,786 | 2806 | 186 | 3,012 | 328 | 7.6% | 1.2% |
<23 | 100 m | 157,000 | 117,600 | 1,904 | 3060 | 118 | 3,229 | 217 | 2% | 0.18% |
<24 | 60 m | 460,000 | 303,000 | 1,991 | 3271 | 87 | 3,382 | 153 | 0.7% | 0.05% |
<25 | 40 m | 2,200,000 | 1,740,000 | 2,068 | 3496 | 77 | 3,520 | 138 | 0.16% | 0.008% |
The values derived are of course approximate. The number of 100-meter objects might be anywhere between 150,000 and 165,000, and the same holds for all the other values derived from this diagram. The same goes for the extrapolated asteroid numbers: I kept the calculated value (i.e. 3,229), the real value being perhaps somewhere above 3,000 and below 3,500.
Notes:
(1) There is an error in the table here; the number should be higher.
(2) Spaceguard goal reached! IF the population model is correct, and IF the simple extrapolation model is correct too... As time passes we will become surer and surer. If by 2008 we are still discovering 50 objects brighter than magnitude 18 per year, we will not have reached the goal.
Column 9 = column 7 / column 3
Column 10 = column 8 / column 4
Torino scare
Maybe there is a typo in the above title. Maybe there is not. To get some information about the Torino scale, see here.
It is a scale created to communicate the impact hazard to the public.
Let's take another look at the table above, in a different way:
Absolute magnitude | Standard estimated diameter | Estimated number (by magnitude) | Estimated average time between impacts (years) | Probability of an impact in the coming century | Probability of no impact | Known number in 2002 (by magnitude) | Probable number discovered at the end of Spaceguard (by magnitude) | Number of objects remaining to be discovered (by magnitude) | Probability of discovering an impacting object before the end of Spaceguard |
---|---|---|---|---|---|---|---|---|---|
<13 | 10 km | 6 | 100,000,000 | 10^-6 | 99.9999% | 3 | 3 | 0 | 0 |
<14 | 6.3 km | 10 | 40,000,000 | 2.6 × 10^-6 | 99.99974% | 13 | 16 | 3 | 7.8 × 10^-7 |
<15 | 4 km | 44 | 10,000,000 | 10^-5 | 99.999% | 37 | 42 | 5 | 1.1 × 10^-6 |
<16 | 2.5 km | 138 | 3,000,000 | 3 × 10^-5 | 99.996% | 85 | 116 | 31 | 6.7 × 10^-6 |
<17 | 1.6 km | 321 | 1,100,000 | 9 × 10^-5 | 99.991% | 188 | 297 | 109 | 3.1 × 10^-5 |
<18 | 1 km | 761 | 450,000 | 0.00022 | 99.98% | 276 | 462 | 186 | 5.4 × 10^-5 |
<19 | 600 m | 1,520 | 210,000 | 0.00048 | 99.95% | 360 | 624 | 264 | 8.3 × 10^-5 |
<20 | 400 m | 2,600 | 110,000 | 0.00092 | 99.9% | 349 | 605 | 256 | 9.1 × 10^-5 |
<21 | 250 m | 7,100 | 47,000 | 0.002 | 99.78% | 292 | 519 | 227 | 6.4 × 10^-5 |
<22 | 150 m | 26,900 | 15,000 | 0.0066 | 99.33% | 186 | 328 | 142 | 3.5 × 10^-5 |
<23 | 100 m | 117,600 | 3,800 | 0.026 | 97.35% | 118 | 217 | 99 | 2.2 × 10^-5 |
<24 | 60 m | 303,000 | 1,300 | 0.077 | 92.23% | 87 | 153 | 66 | 1.7 × 10^-5 |
<25 | 40 m | 1,740,000 | 270 | 0.37 | 63.1% | 77 | 138 | 61 | 1.3 × 10^-5 |
To calculate the last column, I took the probability of impact by a single object (i.e. the probability of an impact, column 5, divided by the number of objects, column 3) and multiplied it by the number of objects which should still be discovered before the end of the survey (column 9). As can be seen, there is a 99.995% probability (taking 5 × 10^-5 as an average) that for any given magnitude class we will not discover an impacting asteroid before the end of the survey. Said otherwise, we are 99.995% sure that the Torino scale is useless in this context. It is only useful if we want to communicate about objects for which we do not have a good orbit, whose future position is so uncertain that the uncertainty zone is large enough to include the Earth. It is also useful for communicating about objects which could make very close approaches, so close that the calculation cannot be precise enough even with good orbital elements. I have not made the calculations myself, and I do not know whether an object which could approach the Earth's surface to within 1000 km could be discriminated from a real impactor early enough. The Torino scale is therefore more a tool to generate false panics (real panics, but for false reasons) than a useful way to communicate about something which, in all probability, if we ensure that the orbits of the objects are of sufficient quality, will never happen.
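As a check, here is the calculation for the H < 18 line of the table (the three input values are taken directly from the columns named above):

```python
# Sketch of the last-column calculation for the H < 18 class: the per-object
# impact probability times the number of objects still to be discovered.

p_impact_century = 0.00022       # column 5: probability of an impact in the coming century
population = 761                 # column 3: estimated number of objects in the class
remaining_to_discover = 186      # column 9: objects still to be discovered by the end of 2007

p_discover_impactor = p_impact_century / population * remaining_to_discover
print(f"{p_discover_impactor:.1e}")   # ~5.4e-05, matching the last column
```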
If we were to conduct a complete inventory, down to magnitude-25 objects, column 10 would be replaced by column 5, and the Torino scale would still be useless.
Why is it that people who are faced every year with hundreds to thousands of deaths, like those working on earthquakes or volcanoes, do not use any predictive scale, while people who in all likelihood will end their scientific careers, as well as their lives, long before the next asteroid impact have created such a scale?