Soumen Nandy - email: firstname.lastname@example.org
Of course, this same equipment and technique could be used to localize almost any visible (from a tower) phenomenon in the forest. That could just be Officer Dave giving the Weekend Foliage report!
Except that people are usually not dispatched to check on "Weekend Foliage". Actually, the observer in the fire tower uses the alidade to measure the direction to any smoke he sees. That smoke is not necessarily a forest fire; it could be controlled burning. He can make an estimate of distance, but the actual location of the smoke has to be determined by triangulation, as Soumen mentioned. Whether the smoke is actually a wildfire has to be determined by dispatching a ground observer, perhaps with a fire crew. Usually tower observers are instructed to observe and report the behavior of the smoke for a few minutes before ground crews are dispatched, to minimize the expenditure of resources investigating false alarms.
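For the curious, the cross-bearing arithmetic behind that triangulation is simple enough to sketch. Here's a minimal illustration in Python; the tower coordinates and bearings are made up for the example (real dispatchers work from map grids and declination-corrected compass bearings):

```python
import math

def triangulate(tower_a, bearing_a, tower_b, bearing_b):
    """Intersect two sight lines (compass degrees, 0 = north, clockwise)
    from towers at known (east, north) map coordinates."""
    ax, ay = tower_a
    bx, by = tower_b
    # Convert each compass bearing to a unit direction vector.
    dax, day = math.sin(math.radians(bearing_a)), math.cos(math.radians(bearing_a))
    dbx, dby = math.sin(math.radians(bearing_b)), math.cos(math.radians(bearing_b))
    # Solve  A + t*dA = B + s*dB  for t by Cramer's rule.
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel -- no fix possible")
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

# Tower A at the origin sees smoke due northeast (45 degrees);
# Tower B, 10 km east, sees it due northwest (315 degrees).
print(triangulate((0, 0), 45, (10, 0), 315))   # -> (5.0, 5.0), roughly
```

Two towers and two bearings pin the smoke to a point; a third tower's bearing is the usual sanity check against reading error.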
Well, it's the standard in the US, and possibly elsewhere; the closest International Standard (metric) size is A4, at 21 x 29.7 cm (8.27" x 11.69").
If that 29.7 cm number bugged you as much as it bugged me (I mean, this is the metric system, after all), here's the story: A0 is defined as one square meter in area, with a length/width ratio of the square root of 2 (118.92 by 84.09 cm). A1 is an A0 sheet cut in half, and A2-A5 are made by progressively cutting in half, again and again. Part of the reason for defining A0 as a square meter was that it allowed one to nicely figure the weight of a sheet from the metric units for paper stock, grams per square meter (useful whether you're ordering huge rolls for printing or simply figuring postage). A single sheet of A4 paper of standard 80 g/m^2 stock weighs exactly 5 grams.
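If you'd like to check the arithmetic yourself, here's a quick sketch (Python, purely for illustration) that derives every A-size from the two defining rules -- one square meter for A0, and a root-2 aspect ratio preserved by halving:

```python
import math

def a_series(n):
    """Dimensions (cm) of ISO A-series paper size An.
    A0 is 1 m^2 with a sqrt(2) aspect ratio; each successive size
    halves the area by cutting the long side in two."""
    area_m2 = 2.0 ** (-n)                      # A0 = 1 m^2, A1 = 1/2, A2 = 1/4 ...
    width = math.sqrt(area_m2 / math.sqrt(2))  # short side, in meters
    length = width * math.sqrt(2)              # long side keeps the ratio
    return round(length * 100, 1), round(width * 100, 1)

print(a_series(0))   # -> (118.9, 84.1)
print(a_series(4))   # -> (29.7, 21.0)

# Weight of one A4 sheet of standard 80 g/m^2 office stock:
print(80 * 2.0 ** -4)   # -> 5.0 grams
```

(The published standard rounds these figures to whole millimeters, which is why A4 is listed as exactly 210 x 297 mm.)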
Some people think that it is entirely appropriate to treat the anthropic principle as a lazy freshman gimmick. Others see it as our only shot at an answer to the questions that keep them up at night. (Lazy freshmen seem to sleep okay)
There are actually two forms of the anthropic principle: the strong anthropic principle, and the weak anthropic principle.
Proponents of the strong anthropic principle make arguments based on the analysis of what the universe would be like if, say, Planck's constant were 1% larger or smaller. They tend to conclude that stars would not form -- or black holes would subsume all the matter in the universe -- or other dire consequences that would preclude The Universe As We Know It, and presumably would preclude any type of Life, much less the type of intelligent life that it takes to stay up nights asking these questions. (As someone who has stayed up enough nights to earn the Guild Card in philosophy, I have got to wonder how intelligence figures into this. I'd guess the guys who answer The Big Questions are prone to overestimate those of us who ask them!)
I'm not sure that I can accept their analyses of what the universe would be like if they changed any of the fundamental rules or constants (the Universe has turned out to be awfully clever thus far), and I can't even say The Universe As We Know It would be necessary for either life or intelligence. (The Eager Undergraduate Research Associates I sent out failed to return with either A Life *or* Intelligence)
That's where the weak anthropic principle comes in. It's a lot easier to assert convincingly that Homo sapiens would only pop onto the scene if the Universe were 'just so'. Perhaps there are warp-core-breaching subspace entities who stay up nights wondering why time always runs backwards, and complaining that they can't develop any science because they only see the future (and hence know all effects, but none of the causes). But in the countless possible parallel and sequential universes, every time Man evolves, it's always in a universe that's pretty much identical to the one he's in now.
I suppose you could compare Man to a Cosmic Bore -- he always sees parties breaking up, conversations ending, entropy increasing, and time flying like an arrow. It never occurs to him that it might just be HIM.
Some more formal phrasings from Barrow and Tipler:
Weak Anthropic Principle (WAP): The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the Universe be old enough for it to have already done so.
Strong Anthropic Principle (SAP): The Universe must have those properties which allow life to develop within it at some stage in its history. Because:
- There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'. Or...
- Observers are necessary to bring the Universe into being (Wheeler's Participatory Anthropic Principle (PAP)). Or...
- An ensemble of other different universes is necessary for the existence of our Universe (which may be related to the Many-Worlds interpretation of quantum mechanics).
Final Anthropic Principle (FAP): Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, it will never die out.
What is much less well known is that Galileo's final pungent, concise, and unanswerable reply to the Pope's actions remains on display, after all these centuries, in the Museo di Storia della Scienza (Museum of the History of Science) in Florence, Italy. Even the least technical and literate monoglot reader will be able to appreciate the power of his refutation.
If that link is down (it gets a surprising number of hits every day from people all around the world), try the alternate site at NASA, which has its own theory of planetary motion and a proprietary interest in the name "Galileo". You'll really be missing something if you don't try it!
and how did computers make study of the related branch of mathematics practical?
Carya (e.g. pecan) and other nuts prized by humans have a high oil content, so they are a concentrated energy source, a delicacy of sorts. Quercus (oak seeds/nuts) on the other hand are a very common and important mainstay of the mast we see around us, but acorns are much lower in oils than many other familiar 'nuts'. They may not be as ideal a food for a hungry mammal building fat stores for the long winter, but they make up in volume what they lack in 'punch'.
Since animals are pretty well adapted to recognizing (and hence 'enjoying') concentrated energy sources, I'm sure our local woodland creatures actually do consider Carya nuts a tasty treat. I sure do.
Soumen 'whose tastes are too highly evolved for his waist' Nandy

The amount of Carya mast is a major determinant of the size of squirrel populations in autumn, going into the winter, but the amount of Quercus mast is a major factor in the size of the populations surviving the winter to begin the breeding season the next spring.
It sure looks like a depiction of the Mandelbrot set in the vicinity of the origin (I'll take that geek award now, thank you) -- and since you're not cruel, I'll assume that's what it is. Of course, I'm biased. I met Benoit Mandelbrot at Boston University, before he became a celebrity, so I have a bit of a proprietary interest in him.
(Historians seeking snippets of brilliance from that historic meeting will be disappointed: due to a pair of singularly excellent parties I attended that weekend, I cannot recall anything he said. This is either a reflection of the incompatibility of mathematics and raucous socializing or a repudiation of the two-party system. I suspect it was the latter, because I distinctly recall that *many* things began to resemble the Mandelbrot set by midnight that night.)
But while it is impossible to be entirely certain of the identity of this fractal-looking set, the very fact that we can *look* at such a depiction and say "Oh yeah - Mandelbrot" is a profound commentary on the role of modern computerized depictions.
You see, the Mandelbrot set doesn't look like that. In fact, it doesn't look like *anything*. It's a mathematical set built (if I recall correctly) from counts of the convergence/divergence of the iterations of f(z) = z^2 + c, where z and c are complex numbers (i.e. the sum of a real and an imaginary component; 'imaginary', of course, is the term we use for multiples of the square root of -1).
Actually, it's not even *that* simple. The Mandelbrot set isn't a simple graph of a function. You don't need a computer to graph a function, as we all learned in school. But to calculate the Mandelbrot set, you have to calculate and recalculate the complex function many times and see how fast it converges or diverges (or if it is chaotic). After you've done all that, you have exactly one point in the picture. Now you have to repeat it for all the other points.
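That calculate-and-recalculate loop is short enough to show. Here is the standard "escape time" sketch, in Python for illustration; the count returned for each point c is what gets mapped to a pixel color:

```python
def escape_count(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0 and count steps until |z| > 2
    (proven divergence).  Points that survive all max_iter iterations
    are treated as (probable) members of the Mandelbrot set."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # escaped: this count picks the pixel color
    return max_iter           # never escaped: (probably) in the set

print(escape_count(0))        # -> 100 : the origin never escapes
print(escape_count(1))        # -> 2   : 0, 1, 2, 5, ... blows up fast
print(escape_count(-1))       # -> 100 : cycles 0, -1, 0, -1 forever
```

Run that once per pixel over a grid of complex values and you have the picture -- which is exactly why nobody drew one by hand.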
Worse still, without computers, we could never get those pretty pictures that we associate with these fractal curves. After all, the Mandelbrot set is not one of those Cartesian (x,y) graphs we learned in school. If you plot the numerical data we got above in a Cartesian graph, you get a mess. Instead of a smooth curve, the points jump almost randomly up and down like the polygraph (or political position) of a tobacco executive at a Congressional Hearing.
Of course, without computers, it would have been tough to perform all those millions of calculations, but few people realize that without the graphics capability of modern desktop computers we couldn't display them in a manner that made sense to our brains! Supercomputers could have done the job long ago, of course, but who was going to assign precious hours of supercomputer time to playing with such mathematical oddities?
The 'playing' was essential. By changing the color boundaries, zooming in and out, and otherwise 'fooling around' with the numbers, we began to develop a useful set of intuitions about the behavior of such sets -- and even to understand the meaning and implications of the properties themselves.
[The Mandelbrot set is considered to be a permutation on the theme of the Julia set, a mathematical construct published long ago by Pierre Fatou (1878-1929) and Gaston Julia (1893-1978). But while the Julia-Fatou work was considered to be a masterpiece, it was widely -- well -- ignored. It was simply too difficult to wrap your mind around the mathematical constructs in a rigorous way, to foresee the implications of the math.]
If you'd like to do some playing yourself, I suggest Sean Reed's Interactive Mandelbrot Generator, which uses Java to produce results much more quickly than the traditional 'download' generators. If you'd like to explore the relation between the Mandelbrot and Julia sets, try the Mandelbrot Page (and other info) at Interactive Mathematics Miscellany and Puzzles. There are dozens of other interactive fractal sites on the web.
One of the properties that were illuminated by playing with the Mandelbrot set was the concept of "fractional dimensions". Now this may sound like a plot gimmick in a bad science fiction novel (anything I explain comes out sounding that way) but in actuality, it's much more straightforward.
Most of us have heard rabidly enthusiastic math-types describe a point as having 'zero' dimensions, a line as 'one dimension', a sheet of paper as 'two dimensional', and Baywatch as 'seriously three dimensional'.
In real life, things aren't shaped that way (and I don't just mean 'Baywatch'). If you look at a real shoreline on a map, for example, it's rather ragged. If you zoom in on ever-higher resolution maps, you see more raggedness. Finally, you begin to zoom in on individual rocks and eventually atoms, getting more zigs and zags all the while, but generally, the shoreline looks equally jagged on most scales. This is called 'self-similarity' and is one of the more interesting properties of fractals (and one of the least interesting properties of Baywatch).
So, as you 'decrease the size of your ruler', you'll discover that the shoreline gets longer, because those smooth-looking lines turn out to be more and more jagged. Is there any limit? Is every shoreline infinitely long? Well, since we are talking about *real life* here, and not some nightmare out of Zeno (the Greek philosopher, not the warrior princess), there is a limit, and it turns out to be readily calculable.
A 6 inch line has a length of --anyone?-- yes, six. A six inch square has an area of 6^2 or thirty-six square inches. And a six inch cube has a volume of 6^3 or 216 cubic inches. The length of shoreline between two points 6 miles apart turns out to be 6^X -- where X is some number between 1 (for an absolutely straight shoreline) and 2 (for a shoreline so incredibly convoluted that it develops -er- rather unbelievable properties. Look up the Banach-Tarski Paradox -- you won't believe me if I tell you)
The exact value of X for a given shoreline, mountainside, or whatever, is called its 'fractional dimension', which was abbreviated to 'fractal' (some say the term is derived from the Latin 'fractus', broken, but I don't buy it). It's an indication of how 'ragged' a surface is. Using this type of information, we can perform all sorts of useful functions: George Lucas' computers can generate landscapes that look absolutely real, scientists can model and prevent soil erosion, and fuzzy logic washing machines can get your clothes cleaner with less detergent and with less wear on the clothing.
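To make the ruler game concrete, here's a sketch (Python) using the one 'coastline' whose books are cooked in our favor -- the Koch curve, where each threefold shrink of the ruler reveals exactly four times as many segments:

```python
import math

# For a Koch-curve coastline, shrinking the ruler by 3x always reveals
# 4x as many segments, so the measured 'length' grows without bound:
for k in range(5):
    ruler = 3.0 ** -k
    steps = 4 ** k
    print(f"ruler {ruler:.4f}  measured length {steps * ruler:.3f}")

# The fractional dimension X is the exponent relating step count to
# ruler size:  steps = (1/ruler)^X,  so  X = log(steps) / log(1/ruler).
X = math.log(4 ** 4) / math.log(3.0 ** 4)
print(round(X, 3))   # -> 1.262, between 1 (straight) and 2 (plane-filling)
```

A real shoreline isn't exactly self-similar, of course, but the same log/log slope over survey data is how its dimension gets estimated in practice.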
The Mandelbrot set has this kind of fractal self-similarity. As you zoom in on it, you're constantly seeing new details, new zigs and zags -- yet it is all so self-similar that it's impossible to tell what scale you're looking at -- is the screen looking at a region a tenth of a unit across or a ten billionth?
Ho hum, right? Well, no.
You see, this sort of self-similar complexity is buried in the transitions of most real-world phenomena. In other words, when you have opposing physical forces, areas arise where the behavior is 'chaotic'. I won't describe chaos in too much technical detail, but it essentially means "exquisitely sensitive to initial conditions". Two asteroids may be essentially identical, and follow identical stable orbits, but after a billion years, one suddenly spirals into the sun because of a tiny deviation. The same thing happens in some marriages.
Not only does this happen everywhere, but it affects that most mundane of daily topics: the weather. Meteorologists modelling the atmosphere call this the "butterfly effect": a Monarch butterfly in Omaha flaps its wings, and six weeks later, a typhoon ravages Okinawa. (on the other hand, when a rude driver in Boston flips you a bird, and two minutes later, smashes into a concrete barrier, that's called "justice")
I'd love to stay and chat, but I've got party tonight (only one) and they tell me there's a certain Sharon who has a nice Set and is eager to meet me. Maybe this time I'll remember the conversation.
The Heisenberg uncertainty principle states that it's impossible, even in principle, to measure both the position and momentum of any particle to high accuracy at the same time. By decomposing position, time, and momentum into their basic components (x,y,z,t,m,v), we find it is impossible to determine many other pairs of useful properties simultaneously. [especially since these six elements are precisely the ones a physicist might use to exactly describe a particle whose charge and other quantum properties are fixed]
Pauli's principle is qualitative, Heisenberg's is quantitative -- which means that there is an exact mathematical limit on how precisely you can measure the position/momentum pair. If you know the position to a certain degree of precision, you can calculate exactly how unreliable your momentum determination must be, and so on, using a fundamental constant found throughout physics: Planck's constant divided by 2*pi.
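To see just how quantitative 'quantitative' is, here's the arithmetic for one concrete (illustrative) case -- an electron whose position is pinned down to roughly the size of an atom:

```python
# Minimum momentum uncertainty from  dx * dp >= hbar / 2
# for an electron confined to about one angstrom (1e-10 m).
hbar = 1.054571817e-34        # J*s  (Planck's constant divided by 2*pi)
dx = 1e-10                    # position uncertainty, meters
dp_min = hbar / (2 * dx)      # smallest possible momentum uncertainty, kg*m/s
m_e = 9.1093837e-31           # electron mass, kg

print(dp_min)                 # ~5.3e-25 kg*m/s
print(dp_min / m_e)           # velocity uncertainty ~5.8e5 m/s -- not exactly a rounding error
```

At everyday masses and distances the same formula gives uncertainties absurdly below anything measurable, which is why billiard balls seem so well-behaved.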
More importantly, quantum physics is founded on the concept that math principles like Heisenberg's don't just mock our capabilities as scientists, but actually reflect some deeper underlying reality. Many common phenomena (sunshine, chemistry, etc.) can only be explained by this assumption. But since a new theory is always just around the corner, I've always been more impressed by the fact that many common gadgets (transistors, solar cells, etc.) were invented based on this assumption. These things work, and they shouldn't if Heisenberg were merely talking about the limits of measuring instruments -- the electrons 'misbehave' on a fundamental scale, as if they knew we couldn't ever catch them.
All this is of little concern in the macroscopic everyday world we live in now. However, in the micromoments following the Big Bang, when the universe was tiny and the energies involved were immense, the value dictated by Heisenberg's principle is relatively immense -- it basically precludes saying anything more than "the particle is somewhere in the (tiny) universe". But we *knew* that, didn't we? The limitation on identical quantum states is also pretty limiting under these conditions.
So, in the Beginning, there isn't much we can say about the universe. Almost everything you'd want to specify hides within Heisenberg's limit. It actually is much *worse* than that, because familiar physical principles and constructs lose all meaning under the incomprehensible conditions of the First Moments.
Time, for example, becomes utterly meaningless. How is this possible? Well, one of the fundamental questions that bright quantum physics students find themselves pondering is: "Is time quantized?" -- in other words, does time come in tiny discrete units, like so many other familiar basic quantities such as charge? There are certainly reasons to believe it might. (e.g. some accumulated bodies of observation suggest that galactic redshifts are 'quantized' -- that they fall into distinct values rather than varying continuously)
But if it does come in discrete units, they must certainly be very tiny units. While I am sometimes driven to insist that *everything* takes fifteen minutes (at least) -- in an effort to maintain control over my schedule in the face of innumerable 'only a minute' and 'just a sec' demands -- the fact is that the universe is much busier than I am. The 'quantum of time' can't be any larger than the time it takes light to cross the smallest discrete unit of distance. If I recall correctly, that's something like 10^(-44) sec. (which is consistent with the observed redshift quantization)
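That 10^(-44) figure is the Planck time, and it falls straight out of the fundamental constants -- a quick check, using standard CODATA-style values:

```python
import math

# Planck time: the light-crossing time of the Planck length,
# t_P = sqrt(hbar * G / c^5) -- the natural candidate scale for a
# 'quantum of time' built purely from fundamental constants.
hbar = 1.054571817e-34   # J*s
G = 6.67430e-11          # m^3 / (kg * s^2)
c = 2.99792458e8         # m/s

t_P = math.sqrt(hbar * G / c ** 5)
print(t_P)               # ~5.4e-44 seconds
```

Whether time actually comes in lumps of this size is an open question; the calculation only says that *if* there's a smallest tick, this is the scale where our current physics stops being able to argue about it.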
But when physicists try to calculate what happened in the early moments of the universe, they find that time (one of those six fundamental Heisenberg quantities) becomes actually meaningless. Its uncertainty is so great compared to the other quantities that not only is it impossible to tell if something happens before or after something else -- but time can concurrently be flowing forwards, backwards, and (in some theories) sideways. But since Heisenberg is supposed to reflect a deep underlying reality, we have no choice but to conclude that time did, indeed, flow forwards, backwards and sideways concurrently.
Finally, there is a principle called 'symmetry breaking' which basically states that In The Beginning there may have been only One Fundamental Force, but as the Universe cooled and spread out, this force began to manifest itself in somewhat different forms - "breaking" into what appear to be different forces. The example commonly given is "perfectly symmetric" water freezing into crystals that have a specific orientation, and therefore have 'preferred directions' -- i.e. orientation acquires a significance it didn't have before.
Frighteningly, there's considerable reason to believe that the fundamental physical constants we see may not be predetermined. In other words, the values of Planck's, Faraday's, and other essential constants (who knows, perhaps even 'mathematical' constants like pi) didn't have to hold the values they do. The universe may have settled on these values early on, but in other "Big Bangs" different physical constants may have arisen. So everything about the Universe is somewhat 'ad hoc' -- a cosmic 'Just So Story'.
Now let's turn to the other side of the question: religion.
Well, you can't get far discussing religion without at least some inkling of what God is/might be. One Common principle is that God is The Creator. In a clockwork universe (such as Descartes envisioned) there really wasn't much choice as to how the universe would unfold. But in the modern quantum universe there is a great deal of 'freedom' available. There might be other universes that are so different from us as to be unimaginable -- with different physical laws. God moves from being just a giant "ON" switch to being a potentially much more Active Creator. Choices were made.
The question of whether God is 'active' has always been a hot one. Some people reconcile God with the 'predictable' physical universe by simply asserting that God created (the universe He wanted) perfectly. Therefore His intervention would be both unnecessary and undesirable. (Our failure to see the perfection of this universe arises from our own imperfection) Others struggle with maintaining faith in the face of a God who doesn't intervene as often as we might like (or see as optimal)
In classical physics, there was little room for an Active, Conscious God. In quantum physics, God can decide (and alter) every interaction or existence of every single subatomic particle at almost any time. Heisenberg's Uncertainty principle becomes "The Veil of the Deity" behind which He is free to act as He will and no man can say 'nay'; as well as a description of a mechanism for true Omnipresent Omnipotence!
Are properties like "Active" and "Conscious" essential to a concept of Deity? Certainly many world religions don't think so. The Tirthankaras of the Jain religion of India (which still has millions of adherents) explicitly disavowed this concept. Maimonides, arguably the foremost philosopher of the more familiar Jewish faith, also argued strenuously against both qualities in his three-volume "The Guide of the Perplexed". The great Christian theologians have also weighed in on this side of the issue, with such famous 'proofs' of the existence of God as the 'Divine Watchmaker', etc.
But most people, in most religions, seem to derive comfort (or socially useful fear) from the concept of a God who is Actively interventionalist, who is "Omnipresent, Omniscient, Omnibenevolent", and who is something very other than the serene immutable Prime Principle of many classical theologians, Jains, and scientist-philosophers.
Strangely, modern quantum physics not only allows room for a place for such a God to exist, it even gives, after a fashion, a possible address.
Too bad physics doesn't give us street directions as well. I've got some vacation days coming.
To many people, a 'steady state' universe (unchanging on the large scale) was a more 'perfect Creation' than a changing universe. Alas, there were a number of physical discoveries that militated against a steady state. Olbers' Paradox (well-known in Einstein's time) argued that the universe could not be infinitely old, infinitely large, and unchanging, because then the night sky would not be dark: every point in the sky would be filled with the light of some immensely distant star, emitted uncounted eons ago. Hubble's discovery of the expansion of the universe (the progressive redshifting of distant galaxies) meant that 'steady state' had to be modified with 'continuous creation' -- where new matter is constantly created to 'fill up' the expanding universe, and prevent a 'thermal death' of the universe as everything receded infinitely far from everything else. New modifications of the Steady State theories arose as fast as new 'disturbing' observations were made.
One lay description of the physical meaning of the Cosmological Constant would be "a repulsive force that acts to counteract the constant attractive force of gravity" (which would otherwise tend to cause the universe to collapse). As you can see, it was part of a long tradition of trying to maintain the 'perfect' nature of a 'fundamentally unchanging', 'symmetric' universe -- but it was more than just that. You see, without an opposing force to gravity, field theory produced all sorts of 'ridiculous' (now accepted) notions such as "a perfect vacuum has an intrinsic positive energy" (in a gravitational field -- which, absent an opposing force, would mean 'anywhere in our universe').
I'm not sure that it is possible to say that a single discovery or observation firmly disproved the Cosmological Constant, since the idea is far from dead even today (though its philosophical intent and consequences are no longer the same). However, I think the 1964 discovery of the 3K radiation (radiation remnants of the Big Bang, observable in all directions in space, which won Penzias and Wilson the 1978 Nobel Prize in Physics) represented fairly solid proof of the Big Bang. If the Universe began in a Big Bang, Steady State became much less tenable!
Still, I sympathize with Einstein. God doesn't listen to what I tell him to do, either!
Well, if there is one thing that history has taught me, it's that people will ferment or smoke just about anything -- once. And if it's foul-tasting and black, they'll use it as a substitute for coffee. Anything. Even that stuff they sell at Starbucks. Makes me wonder about the human species. It really does.
Well, of course, the first thing that pops into mind is that old Southern tradition: chicory. But when I translated 'chicory' into Bostonian, I decided it couldn't possibly be the plant you discussed. In Boston, chicory leaves are sold in trendy grocery stores as gourmet 'endives' (the narrow-leafed form) and 'escarole' (the broad-leafed form) -- only the very folksiest, back-to-nature stores (i.e. the most expensive) stoop to calling it chicory, and sell the roots, from which the coffee substitute is made. (True chicory is closely related to the true endives, and escarole is really broad-leafed endive -- the stores are playing name games)
You specified that the *seeds* of the mystery plant were used as a coffee substitute. Besides, escarole and endive look nothing like the picture.
The second thing that popped into mind (recalling whose web page this was) was the good old Kentucky Coffeetree Gymnocladus dioica, which is better known as an exotic wood than as a food product -- with good reason, since it has known toxicity. (oops- re-reading the question, I see you already mentioned that -- sorry)
Actually, I've seen it listed as G. dioica and G. dioicus -- which makes a sort of twisted sense, since the tree is "dioecious" (meaning it has male and female individuals), and dioica and dioicus would be the feminine and masculine forms of the same word in Latin. But I don't think that botanical Linnaean binomial taxonomy works that way. Either a (contemporary) Somebody screwed up or a (historic) Somebody couldn't make up their mind. I used 'dioica' because the names I've seen for other plants seem to favor the feminine dioecious name (Urtica dioica = stinging nettle; Silene dioica = red campion; Bryonia dioica = white bryony; and many others)
The Kentucky Coffeetree seems to meet all the criteria you listed -- but I'd like to hear more about the need for scarification before germination. Care to enlighten us?
The seed coat is so hard and thick that the seed cannot germinate until it is broken. Since it has a high moisture content this is usually taken care of in nature by winter freezing. Once when I decided to grow some seedlings to transplant into my lawn, I simply used a file to file through the seed coat in three or four places per seed. When I planted the seeds they then germinated just fine. - Duane
Actually, PGP itself is a hybrid system -- but I may not have time to explain it specifically, so I'll just discuss the properties of dual key systems. It's also worthwhile to note that you don't have to do anything very fancy to get a pair of keys -- they're just numbers after all. Several sites exist that will generate a pair of keys upon request. However, as with all such things, there are issues of trust -- you trust the site to *not* keep a record of the keys it sends you, and to *not* create faulty keys that it knows how to crack. The same applies to any web pages that claim to link to the key sites.
The basic trick to these dual key systems lies in the fact that if you encode a message with one key, you can only decode it with the other. Yup, even if you use a password (key) to encode a message, you can't decode the message you just encoded!
This leads to two interesting properties: universal accessibility and verifiability. Since knowledge of only one key is enough to let me encode a message, but not enough to let me decode it, I can publish one of the two keys in a 'PGP phone book' -- anyone can look up my 'public key' and use PGP to send me a message that no one else can read without using my 'private' or unpublished key (which I keep safely tattooed on the inside of my eyelid)
Also, if I code a message with my private key, any one can decode it using my public key -- which is not as useless as it sounds. It may not do much to protect the contents of the message, but it absolutely guarantees that *I* sent the message. If someone coded a message, without my private key, and claimed it was from me, the coded message would decode to... well, garbage. This is the *verifiability* of the dual key system.
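Here's the whole trick in miniature -- a toy RSA-style key pair built from the textbook primes 61 and 53 (real keys use primes hundreds of digits long, but the algebra is identical), showing both directions at once:

```python
# Toy dual-key (RSA-style) demo.  Do NOT use numbers this small for
# anything real -- they can be factored by inspection.
p, q = 61, 53
n = p * q                     # 3233: the shared modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # 'public' exponent (coprime to phi)
d = pow(e, -1, phi)           # 'private' exponent: 2753 (needs Python 3.8+)

msg = 65                      # a message, represented as a number < n

# Encode with the public key; only the private key decodes it:
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg
assert pow(cipher, e, n) != msg   # the encoding key can't undo its own work!

# Encode ('sign') with the private key; anyone can decode with the public:
sig = pow(msg, d, n)
assert pow(sig, e, n) == msg
print("both directions check out")
```

The middle assertion is the counterintuitive heart of it: even knowing the key you encoded with, you can't get the message back without the other key.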
The basic dual key system can also be extended to other uses, through just a bit of cleverness.
For example, suppose I want to take advantage of the verifiability of sending a dual-key message, but I don't want the message itself (a USENET post, for example) to be encoded -- I want everyone to be able to read it instantly, while still having the benefits of the dual key.
Well, one easy solution would be to create a "digital summary" of the message and encode *that*. A digital summary isn't a synopsis of the meaning; it's just a binary cross-check of the pattern of bits in the message -- like CRC or other schemes for error correction in files and modem transfers, it only contains information about the pattern of bits. It doesn't concern itself with the meaning of the message. This is sometimes called a digital signature (a term that is also applied to other things)
The digital signature can be checked precisely to determine that the sender is who they claim (because the coded signature decodes properly), and that the message is unaltered (because the decoded signature contains a digital summary that matches the message). As with handwritten signatures, it's never exactly the same twice -- but unlike handwritten signatures, a digital signature can actually vouch for the message it's attached to. If you try to 'cut and paste' a properly executed digital signature onto an altered document, the forgery will be easily detectable.
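Sketched in code (Python; hash-then-sign is the general idea here, not PGP's exact format, and the toy key sizes mean a determined forger could crack this particular pair over lunch):

```python
import hashlib

# Toy RSA-style key pair (textbook primes 61 and 53) to sign with.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (needs Python 3.8+)

def summary(text):
    """'Digital summary': a hash of the bit pattern, reduced mod n
    so the toy key can encode it."""
    return int.from_bytes(hashlib.sha256(text.encode()).digest(), 'big') % n

def sign(text):
    # Encode the summary with the *private* key.
    return pow(summary(text), d, n)

def verify(text, sig):
    # Decode with the *public* key and compare against a fresh summary.
    return pow(sig, e, n) == summary(text)

s = sign("Meet me at the fire tower at noon")
print(verify("Meet me at the fire tower at noon", s))   # -> True
print(verify("Meet me at the fire tower at ONE", s))    # forgery: almost surely False
```

Alter one character and the fresh summary no longer matches the decoded one; paste the signature onto a different document and it fails the same way.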
If this reminds you of some of the new digital finance techniques -- smart cards, intelligent wallets, genius bills, and dimwitted tellers -- well, you're perceptive as all get out. Have a piece of pie.
There's a whole wonderful world opening up, with people creating clever uses for dual key. Sometimes you may want to encode or "sign" a message using your private key *and* the recipient's public key -- to prove authenticity and guarantee security.
One warning: most dual key systems are based on very large prime numbers and derive their security from the fact that "factoring" a large binary number (i.e. your message, in dual key code, is just a huge number to the computer) is a very difficult process at the present state of mathematical knowledge. Right now, it may typically take weeks or decades for a cluster of computers to decode a single message (or it might pop out in a millisecond -- it *is* a matter of luck). Tomorrow a budding Einstein may come out with an equation that makes it a matter of seconds. (and he'll undoubtedly be spirited away by the first large organization to find out about it)
Fortunately, we don't have to worry about this. The truth is that all our messages can probably be read much more easily than that. For one thing, most of us don't memorize our 128-bit key. We let a computer program do that, and then we use a pithy password to prove to the computer that we are indeed ourselves. Obviously, anyone who can use our computer only needs to know that password to send messages (charge $36,000 in pornography to our Visa; transfer our bank balances to a numbered Swiss account; tell that girl in Biology class we like her; post schizophrenic ravings on USENET claiming to be official representatives of our company, etc.)
And if you let *that* happen, the verifiability of the signature could work against you. You would have a tough time claiming 'forgery'.
The only cure for security is more security. If you don't read the newsgroup comp.risks (or subscribe to the e-mail digest version) I suggest you start. Ignorance of security has already ruined many lives, and it's going to get worse before it gets better. (Besides, RISKS usually has at least two or three good belly-laughs a week. Who doesn't love hearing about multinationals screwing up big-time?)
If you really want to know the details of PGP and the RSA algorithm it's based on, try reading this first. It's only a layman's introduction to the math, but it may be enough to make you decide you don't care after all.
If God *did* create life on Earth alone, then the unimaginable vastness of space would be an unbelievably inhospitable desert with one unimaginably tiny speck with a giant neon sign reading "Last nice place for H*U miles" (where H is Hubble's constant, and U is the age of the universe). We wouldn't need to go to Heaven; compared to the rest of the Universe, we'd already be there. I like to think that if God did Create life on Earth alone, He would be so bemused by the contrast that He'd have to create an entire race of intelligent flatworm scientists just to mock The Legend of Earth as intolerably improbable.
Of course, some of the tenets of the "non-special Earth" theory have come under fire lately, now that we've actually begun to be able to detect planets in other solar systems. We quickly found many candidates -- and every one was 'bizarre' or highly improbable by the standards of then-current planetary formation theory. (Of course, these bizarre planets were precisely the only type we're currently capable of seeing, but we were amazed to find so many, so quickly.)
But I think such criticisms miss the mark. The recent discoveries suggest that there may be greater diversity in planetary systems than we suspected, but that does nothing to prove we are intrinsically special in some way. We've been extrapolating from our one puny example, and we may have made too many facile assumptions. It's like American business in the 1990s: one day the data comes in, and you realize that everyone is *not* a Caucasian Male Heterosexual Anglo-Saxon Protestant -- and that you can't go on acting as if everyone 'who counted' was! And slowly, you realize how many ways you've been assuming precisely that.
Am I mixing metaphors -- "typical" planetary systems vs. "diverse" human populations? You bet I am. Because all our decisions, all our actions, are human ones. We are always at the center of our own existence -- but we should be looking outwards. The question is less "Is anyone out there?" than "How will we react when we get out there?"
We all know how the evolution of thought progressed from the geocentric to the heliocentric to the relativistic (no absolute frame of reference). Sagan characterized these changes in paradigm as the "Great Demotions" -- each removing us further (in our own minds) from the center of all being and creation -- with each step we considered ourselves less as the necessary and sufficient _raison d'etre_ for the universe. But I think he was oversimplifying for his audience. We rarely realize how many millions of subtle gradations there were in the thinking along the way, and how many fields this fundamental concept changed. Our language is almost entirely constructed of concepts that will have to change if we ever meet Equals in the universe. After all, calling someone an "animal", a "vegetable" -- or 'objectifying' or 'dehumanizing' them in any way -- is entirely offensive.
How will this play to alien ears? We don't respect animal, vegetable, mineral, or anything but ourselves -- and we barely tolerate each other. I, too, thrill to the stirring works of human art: "Oh what a piece of work is man, in form how express and admirable, in apprehension, how like an angel" ("In memory, how like a Jell-o rat-trap" - author, knowing he mangled the quote). Okay -- if we're the best -- the only -- the Universe has to offer, that attitude is fine. But I think even God must get tired of our snotty attitude -- that's why he invented The Fear of God, that he puts into each of us from time to time.
Sagan's insistence that Earth is not special was not unusual, nor was his enthusiasm for the subject -- but his vocal *public* persistence (how many conversations have *you* had on this topic?) and his written eloquence were.
Over the course of his written and professional work, he delved into the actual astrophysical properties of the observable universe, the history of science (revealing that the 'actual' practice of science always has fallen, and probably always will fall, short of the Scientific Method), the role of science in society, the role of superstition in society, and other topics. While I have not yet read it, his (final?) book "Pale Blue Dot" seemed to be the social and psychological culmination of these many trends.
I have long felt that the effects of contact with an extraterrestrial civilization would be so profound and dislocating that we would scarcely be able to recognize ourselves a century hence. We've been 'the only child' of the universe for far too long. Our isolation has forced us to make assumptions that permeate every crevice of the fabric of our thought. The sudden reality of other intelligence will be devastating in all realms.
Having said that, I don't think our current isolation is healthy for us as a species, unless, of course, it is a permanent feature of an otherwise uninhabited Universe (and even then, I don't think it's served us very well). If indeed other intelligent races exist, it is not reasonable to ask "should" we seek out contact -- if we continue to grow, if we ever venture forth from the house, we *will* someday encounter other "children". Project Phoenix is a search for extraterrestrial intelligence. It allows us to seek contact with those who also welcome contact. It may be a risk. It may be futile. But it is essential.
The longer we wait, the less likely we are to be able to 'play nicely'; the more we will have wrapped ourselves in a muslin of vain imaginings; the bigger the eventual shock. Adolescents often prefer to wait until some imagined "date of future readiness" to take on the burden of responsibilities. Children wish they could be big and strong now (as if that would solve their problems -- though their fantasies of what they would do with that power reveal that it would only cause more problems).
Human history has not shown us that "increased civilization" necessarily creates acceptance of outsiders and productive cooperation (e.g. the highly advanced Chinese empires have millennia of history with two primary foreign policies: exclusion or takeover). It *does* show that only contact can teach tolerance, and that gentle contact "at a distance" is often a good start. We cannot choose perpetual isolation as an option -- any race that would choose to ignore our wishes may well have other unappealing intents as well.
So, I ask you, if *you* were in a new neighborhood, and uncertain about the neighbors, would you let your child outside to play? And if not, what is the predictable consequence of a childhood without friends?
Your fifteen year old (knowing full well that he will never have to face the world alone) thinks he can judge the world based on a vast decade and a half of experience -- but his brief experience makes him very focused on *this* party, *this* embarrassment, *this* school year. Humanity as a creator of civilizations is still in that period of explosive learning and growth that characterizes both early childhood and adolescence -- and we, truly alone without a guide or example, have no idea which we are in. I like to think we will still be around in a million years -- but only if we quit pretending that only the next century or two counts.
It's not that there isn't much to be said about transcendental numbers, but rather that there is too much. This species of number is irrational, which is to say that it cannot be expressed as the ratio of two integers (e.g.: pi, e, the square root of 2, the sine of 3, the sum of 10^(-n(n+1)) as n increments to infinity, etc.), but transcendental numbers are also impossible to express as the root of any algebraic equation with rational coefficients -- you know,
a*x^n + b*x^(n-1) + c*x^(n-2) + ... = 0, where all letters (except x) are rational
So the square root of 3 is irrational but not transcendental (it's a solution of x^2 - 3 = 0). OUT! Worse still, there are many functions that produce these transcendental numbers (except at special arguments), such as the circular trigonometric functions (sin, cos) and the hyperbolic functions (cosh, sinh). These *functions* are immensely useful (YOU! Yes -- you, the guy who sniggered "S-u-u-u-r-r-e, they're useful." Go back and live in a cave. And leave that remote behind when you go!) but these 'transcendental' functions cannot be expressed by any finite series of algebraic operations with rational coefficients. Period.
So there we are -- using functions that can't be described as a finite series of algebraic operations, to produce numbers that can't be expressed as a finite ... how did we get here? This can't be *important*!
It's hard to believe that math changes and that people actually *do* learn to routinely manipulate the "mathematical mysteries" of bygone eras. The concept of 'zero' was once much more mysterious than it is today. The early Phoenicians scoffed at fractions ("One-and-a-half? What? You expect me to believe in a number that is bigger than one but less than two?") Even medieval mathematicians relied on written charts instead of memorizing their "multiplication tables". And today, high school students grumble about doing math without *programmable* calculators and doubt the utility of the 'imaginary numbers' that built the school, the factories for the building materials *and* the calculator.
________ Try dividing MMCMLXXVMMCDXLVII by LIX using Roman numerals
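(A modern clerk would cheat, of course: convert to positional notation, divide, and convert back. A Python sketch -- my own choice of MMCDXLVII divided by LIX as the worked example:)

```python
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s: str) -> int:
    # A symbol smaller than its right-hand neighbor is subtracted (IV = 4).
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        total += -v if i + 1 < len(s) and v < VALUES[s[i + 1]] else v
    return total

def int_to_roman(num: int) -> str:
    pairs = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
             (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
             (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    out = []
    for value, sym in pairs:
        while num >= value:
            out.append(sym)
            num -= value
    return ''.join(out)

q, r = divmod(roman_to_int("MMCDXLVII"), roman_to_int("LIX"))  # 2447 / 59
print(int_to_roman(q), "remainder", int_to_roman(r))  # XLI remainder XXVIII
```

The hard part -- long division -- happens entirely in positional notation; the numerals are just costumes at either end, which is exactly why positional notation won.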
When the transcendental numbers were first explored, they were considered utterly mysterious -- beyond everything that was known or even knowable -- hence the modern name "transcendental". Indeed, the Pythagoreans (who were a mystico-religious order as much as they were mathematicians) held the irrationality of the square root of two among their deepest secrets. Death was among the least of the punishments for He Who Spilled The Beans! Even later 'more modern' minds boggled at the mysteries hidden in the transcendentals: Descartes swooned, Leibniz was giddy, even I ... think I need a cup of coffee before I go on.
One consequence of being irrational is that transcendental numbers are nonterminating, nonrepeating decimals. Some rationals are nonterminating (1/3 = .33333...), but a nonterminating *non-repeating* decimal (like pi) can be seen as containing the depths of the universe -- all possible numbers in one. Indeed, transcendental numbers (or combinations of them) are sometimes used as pseudo-random number generators (with the 'pseudo' left off by the Marketing Department) and they seem to function quite well.
Part of me wants to go into the spectral function of the decimals in pi vs. e vs. other transcendentals (i.e. the patterns of various digit combinations in the decimal part of the transcendental) -- but I realize that I am probably the only one who would care (and sometimes even *I* have a life, and don't care). Suffice it to say that the sum of 10^(-n(n+1)) as n goes from 1 to infinity would be a nonterminating, nonrepeating number with a very boring and predictable nature: (.01000100000100000001... or [1 zero]-one-[3 zeroes]-one-[5 zeros]-one-[7 zeros] ... you get the idea) so nonterminating nonrepeating numbers aren't automatically deep or fascinating -- but the "really important ones" (again, like pi) actually seem to be very rich with both meaning and *profound* randomness.
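If you'd like to watch that boring-but-nonrepeating number materialize, here's a short Python sketch (my own construction, using exact fractions) that sums 10^(-n(n+1)) -- a 1 at decimal places 2, 6, 12, 20, ... with ever-widening runs of zeros between them:

```python
from fractions import Fraction

# Sum 10^-(n(n+1)) for n = 1..5: a 1 at decimal places 2, 6, 12, 20, and 30.
x = sum(Fraction(1, 10 ** (n * (n + 1))) for n in range(1, 6))

# Print the first 30 decimal digits exactly (no floating-point fuzz).
digits = str(x.numerator * 10 ** 30 // x.denominator).rjust(30, '0')
print("0." + digits)  # 0.010001000001000000010000000001
```

The gaps between the 1s keep growing, so no block of digits can ever repeat -- nonterminating and nonrepeating, yet utterly predictable.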
So having said this much about the transcendent (which Kant defined as "beyond all possible knowledge and experience") I'm afraid I must stop. Not only is the pursuit futile -- but I see a mob of angry Pythagoreans milling up the street.
Soumen Nandy - email: firstname.lastname@example.org
Boring insects like the locust borer and the horn-tailed tree borer can impact trees that are exposed (e.g. by the roadside) and/or weakened (e.g. by heavy traffic, exhaust, or very dry soil). However, these borers manifest primarily in the trunk, where they attack the cambium and eventually bore into the heartwood. Signs include yellowish sawdust deposits, holes, scars, gnarling, and peeling bark, but not, as far as I know, damage specifically to the inner tissues of the leaves. [To those who take umbrage at my labelling these insects "boring", I must say that my only exposure to horn-tails was outside Portales, NM over ten years ago. There a friend pointed out the unusual spectacle of what appeared to be large wasps 'stinging' a locust. While we speculated as to what offense the tree might have caused, we definitely concluded that any insect that could find a *tree* threatening must lead a very bland life indeed]
Locusts will also tend to shed leaves as a protective measure if their foliage outstrips the available water supply. Transpiration, the process that drives the 'pull' of water from the roots to the leaves, depends largely on the surface area of the leaves. By shedding excess leaves, the tree can decrease its water draw -- this is a survival characteristic, not a weakness, allowing locusts to survive in very dry, sandy -- or even salty or alkaline -- soil where other transplants might not (hence its presence in New Mexico, where it is not native). Obviously, the tree needs to retain enough foliage to meet both present and future nutritional needs. Presumably, the need for leaves decreases at the beginning of the fall, when most of the winter's carbohydrate needs are already stored in the trunk and rainfall in many areas decreases. Still, while Interstates are well-drained, I wouldn't expect this to mean unusually dry soil beyond the right-of-way, and I would expect such leaf loss to begin at the key cell layers at the base of the leaf stem, not those on the underside of the leaf.
At first, I'd guessed that we are speaking of that perennial pest of the Black Locust, the Silver-Spotted Skipper. This robust mariner is known for its jovial dim-wittedness punctuated by outbursts of violence. Often found on desert isles in association with white-topped Gilligan... oops. Sorry, that's the Silver *haired* Skipper.
The pest form of the silver-spotted skipper (Epargyreus clarus), a common butterfly, is the larval stage, which spends the day in a silken nest (often in leguminous plants) and feeds at night. It's been known to infest commercial soybean farms in areas where host populations of its favored diet, the black locust, have declined (so all the hot research money is in its *soybean* life cycle). Fortunately, it's very susceptible to viral infections and is not yet a major pest in soybeans. It also doesn't seem to produce the symptoms described. (*sigh*) So much for my knowledge of pests. Rather than The Skipper, I should have consulted The Professor... which reminded me ...
Up in Anne's territory (Vermont), they have an insect called the Locust Leaf-miner, Chalepus dorsalis Thunb. I imagine that anything that can survive the inhospitable New England winters (Hey, I'm in Boston -- and facing the rather drearier prospect of an "in-hospital" New England winter, as the days shrink like the Malaysian national currency, September begins to show its true colors. "Garcon, un Prozac double, s'il vous plait! Vite!") -- Well, anything that can survive *that* must *thrive* in the sunnier Southern climes of my birth. Certainly, it seemed a likely suspect from the name alone!
Sure enough, the locust leaf-miner is well-documented to produce exactly the symptoms described at precisely the location and times described.
The locust seems to have a peculiar relation with insects. Aside from sharing the name of the most famous Biblical insect (the Biblical locust is akin to swarming grasshoppers and cicadas), and being a host for the silver-spotted skipper and locust leaf-miner, it's a favorite of honey bees, which produce a particularly delicious honey from its nectar.
Of course, there are all the usual details that make this answer not quite accurate. Early Windows 3 systems sometimes considered 64K (65,536) colors to be "true color" (I believe that 'high color' came to be the accepted term for 64K colors, but I recall seeing that term used for other systems with colors in the thousands, from 1K to 256K)
Why all this bickering? Well, in the first place, MS-DOS computers last had a precisely defined, widely-used standard when VGA was defined (a few later standards never achieved universal acceptance). VGA was adopted a *long* time ago -- over ten years. VGA only supported 16 colors at 640x480 resolution in its maximum mode.
It's now common to refer to anything with more colors or better resolution as SVGA -- Super VGA -- but that term never had a specific definition across all manufacturers. It has simply come to mean "better than VGA", and can mean 256 (or more) palette colors *or* resolution better than 640x480 pixels per screen. Other terms, like EVGA (Extended VGA) or UVGA (Ultra VGA), varied from manufacturer to manufacturer and have pretty much passed into disuse.
It's useful to note how color works, both in the Real World and in computing, and how video cards and monitors work together to bring you the image you see on your screen.
Most monitors are analog displays, which means that they are capable of an effectively infinite number of shades of color. This may sound fantastically sophisticated -- until you realize that your TV is also an analog display. The video card does all the necessary internal computations to 'draw' the picture on the screen, and then sends an analog signal to the monitor that tells it what to display. Even most modern 'digital' SVGA monitors (unlike the digital displays of some older standards) actually receive an analog signal -- they are 'digital' because they process the signal (and adjust the settings) using a built-in microprocessor. The *real* limitation on a monitor is usually the number of pixels it can display (640x480, 1280x1024, etc.), not the colors it can display. The video card is generally the culprit when it comes to color compromises. The video card has a limited amount of memory and, as I'll discuss later, this has to be traded between color depth and screen resolution.
There are several ways to model color information. Photographers and printers often use a "CMY" model (cyan, magenta, yellow) because these are the true subtractive primaries (an ink or dye *subtracts* light of certain frequencies from white), while kindergartens teach the rest of us the cruder RYB (red, yellow, blue) model of subtractive 'finger paint' primaries (fingerpaints *absorb* light; the color we see is the light that they fail to absorb -- hence reflect).
Why are there different sets of primaries? Weren't we taught that the primary colors are immutable facts of nature, like gravity, naptime, and raising your hand to speak? Well, when you *mix* colors it makes a big difference whether the display medium is luminous (TV screens, film printers, televangelists) or reflective (fingerpaints, photographs, televangelists).
Luminous media, like TV screens, *add* frequencies (colors) and the result is generally brighter than the brightest parent color. Reflective media, like paints, *absorb* different frequencies (colors) and what you see is what's left. Mix subtractive primaries, and the result is darker than the parent colors.
These properties of primary colors explain an observation that befuddled generations of Preschoolers Educated in the Scientific Technique (or "PESTs", as we were commonly known): The sun puts out a huge number of frequencies (colors) and sunlight appears white, but you can't get white by mixing all the cans of tempera paint the teacher cunningly hid in that locked closet. If you mix enough colors in a luminous medium, you get an approximation of white (like sunlight, composed of different frequencies). If you mix enough colors in a reflective medium, you get an approximation of black (well -- olive-green-brown-black, like a kindergarten floor by 2:30. Few frequencies of light escape -- heck, my *shoes* stick to the floor)
Photo-quality color printers often use CMY-based inks to re-create photo-like colors; the software translates the colors the user painstakingly adjusted on the (luminous) computer screen into values for the (absorptive) inks -- with lots of fancy adjustments, based on the specific color properties of the printer, specific ink cartridges, etc. Of course, mixing all three subtractive primaries should in theory produce a good black, but in practice it yields a muddy brown, so most good color printers add a separate black ink -- the so-called "CMYK" model (K is for blacK, because B is already taken by Blue). This is also useful if the printer is used for text -- so look for a printer where the black ink can be changed separately, rather than changing all the colors when you run out of black.
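Here's what those "fancy adjustments" look like in their most naive form -- a Python sketch of the textbook RGB-to-CMYK conversion (real printer drivers use measured ink profiles; this is only the idealized formula, and the helper name is mine):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion. The K channel soaks up
    the 'common darkness' the three colored inks would otherwise have to
    approximate with a muddy brown."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:                      # pure black: no colored ink needed at all
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))   # pure red   -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0, 0, 0))     # pure black -> (0.0, 0.0, 0.0, 1.0)
```

Note that pure black uses *only* the K ink -- which is exactly why a separately replaceable black cartridge matters for text printing.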
Of course, those early technical engineers all went to kindergarten (or primary school) and learned the RYB primaries. Few went to photography school and learned the CMY primaries. Since a pixel on a screen *glows*, the signal needed an *additive* set of primaries, so the electrical signals/cables in VGA are based on colors called Red, Green, and Blue. These are not quite the kindergarten primaries. For one thing, in the RGB system "Red plus Green equals Yellow" (no matter what the ZipLock commercials and Ms. McGillicuddy said). (A quick look at a color wheel will show why this works.) Fortunately, most art/photo software 'talks' in friendlier terms, so users don't have to confuse themselves with the details of the hardware color model. HTML authors, however, use the RGB standard directly when defining text, background, or other HTML colors, leading to such evocative color names as #B0CD8E (176 parts red, 205 parts green, 142 parts blue).
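Unpacking one of those evocative names is a two-line job; a Python sketch (the helper name is mine, not anything standard):

```python
def parse_hex_color(color):
    """Split an HTML color like '#B0CD8E' into (red, green, blue), 0-255 each."""
    color = color.lstrip('#')
    # Each pair of hex digits is one channel: red, green, blue.
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

print(parse_hex_color("#B0CD8E"))  # (176, 205, 142)
print(parse_hex_color("#FFFF00"))  # (255, 255, 0) -- red plus green is yellow
```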
Since no one understands the primary colors, what hope is there for the "rainbow society"?
Incidentally, if you are familiar with televisions, you probably use the 'brightness and contrast' settings in your art programs too much -- you can achieve better results if you use the strange-sounding "Gamma" control, which adjusts the 'luminosity' of the colors to match your display. I've seen many computers where adjusting the default gamma eliminated the need for *numerous* adjustments of brightness/contrast (with better results) for *every* picture scanned/displayed!
[If you are *not* familiar with television, then why aren't you at the meeting of the Perfidian Redactor Fleet, being held this very minute! Gamma himself will be speaking on "Color-coordinating Laser and Plasma Bursts to Create a Truly Aesthetic Invasion"]
So why does GIF pick a palette of 256 colors from a possible 16M colors? Well, there are 256 possible values for one byte, so we can use one byte to describe each pixel. However, if we pre-assigned the 256 colors, we would find that almost every color we used would be not quite the exact shade we needed. (All veterans of the Crayola 8 pack know this. Adjusting pressure and mixing colors could give you more than 256 colors, but they were never quite the right colors) By picking the "best" 256 colors *for that specific picture* out of the full 16 million possible colors, you get *far* better results.
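A small Python sketch of the pre-assigned-palette problem (the eight-color palette and the sample shade are my own toy choices):

```python
def nearest(color, palette):
    """Pick the palette entry closest to a desired color (squared RGB distance)."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

# A fixed 8-entry palette: black, white, and the six 'corner' hues.
fixed = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 255, 0),
         (0, 0, 255), (255, 255, 0), (255, 0, 255), (0, 255, 255)]

skin_tone = (224, 172, 105)
print(nearest(skin_tone, fixed))     # (255, 255, 0) -- nowhere close

# An adaptive palette, built for this picture, can include the exact shade.
adaptive = fixed + [(224, 172, 105)]
print(nearest(skin_tone, adaptive))  # (224, 172, 105)
```

Real GIF encoders use cleverer palette-picking algorithms (median cut and friends), but the principle is the same: choose the 256 colors *this* picture needs.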
Incidentally, all those demos in the computer store make it seem that the 256 colors of an inexpensive video card will give results almost as good as True Color -- this simply isn't true in the real world. In the first place, in the real world (viewing the web, for example) we often have two or more images on screen at once -- which forces the browser to make a lot of guesses and 'fudge' to show both pictures, with their individual 256 color palettes, using only the *one* 256 color palette it really has available. Also, even a single 256 color photo will suffer because Windows "steals" 16 of the colors on the palette and reserves them for its own set of 'basic' colors -- the VGA colors (which bear a strong resemblance to the Crayola 16-pack). Finally, those demo pictures are carefully 'dithered' -- pixels of different color are placed side by side to create the illusion of an intermediate color. It's been my experience that when your browser 'dithers on the fly', making quick guesses to display a high- or true-color picture on a 256 color screen, the results aren't as good as when a manufacturer carefully tailors a picture for optimum display on their hardware.
The biggest trade-offs in allowing larger palettes come from the memory required to store the picture, and the speed of the video card. VGA uses 640x480 pixels and needs only half a byte (4 bits) per pixel to display 16 colors, so it could actually display an entire screen with about 150K of memory. Move to VGA *resolution* at 256 colors, and you double the amount of memory required *and* increase the amount of computational work. In the days when computers usually used 8-bit processors as the CPU, it seemed reasonable that a VGA card might use a primitive 4-bit CPU, but rather outrageous to use an additional 8-bit CPU 'just for pictures'. Nowadays, I imagine even most VGA computers are actually using this form of "SVGA": 640x480 x 256 colors.
"True Color" (16M) takes three bytes to represent every pixel. And the very people who like to see true colors also like to see the picture with a higher resolution (more pixels = more detail), so 1 megabyte is pretty much the minimum video card sold today -- such a card displays 'true color' at 640x480 pixels, 'Hi-color' (65,536 colors) at 800x600, and only 256 colors at 768x1024. Nowadays, the video chips are much faster -- and specially designed for the task of video processing -- so they can handle the additional processing load with no problems. It is often possible to simply add more video memory to a video card for a (relatively) cheap upgrade to more colors at whatever screen size your monitor supports.
Soumen Nandy
Vice President of Media Intercompatibility, Hardware Security, and Vandalism Prevention, Crayola Loathers International (Translation: No! You *can't* draw on my Monitor Screen!)
Associate Chairman, Pre-school Educational Review Committee (Translation: Everything I needed to know I learned in kindergarten. So how come there are so many college-educated morons?)
High Pedant (Translation: first against the wall, come the revolution)
Anne Lurie - email: ALurie6171@aol.com
potash: 2,250 lb. of 0-0-60 = 1,350 lb. potash
phosphorus: 1,957 lb. of 18-46-0 = 900 lb. phosphorus *plus* 352.26 lb. nitrogen
nitrogen (remainder): 288 lb. of 34-0-0 = 97.92 lb. nitrogen
Total: 450.18 lb. nitrogen, 900 lb. phosphorus, 1,350 lb. potash
[Interestingly enough, I was not able to figure out how to do this in Lotus, so I did it the old-fashioned way with pen and paper, using Lotus to do the math!]
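The same arithmetic in a few lines of Python (a sketch with my own helper; the percentages come straight from the n-p-k labels above, and the nitrogen total comes out to 450.18 lb):

```python
def nutrients(pounds, n_pct, p_pct, k_pct):
    """Pounds of actual N, P, and K delivered by a product labeled n-p-k."""
    return (pounds * n_pct / 100, pounds * p_pct / 100, pounds * k_pct / 100)

n1, p1, k1 = nutrients(2250, 0, 0, 60)   # 0-0-60 (potash)
n2, p2, k2 = nutrients(1957, 18, 46, 0)  # 18-46-0
n3, p3, k3 = nutrients(288, 34, 0, 0)    # 34-0-0

print(k1)                 # 1350.0 lb potash
print(round(p2, 2))       # 900.22 lb phosphorus
print(round(n2 + n3, 2))  # 450.18 lb nitrogen
```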
Anne Lurie - email: ALurie6171@aol.com
"For just as our human body is composed of billions of cells working together as a single living being, so are the billions of life forms on Earth working together as a living super organism.""Gaia" is taken from the Greek term "Gaea" (Earth Mother) [note: I believe Gaea was mother of the Titans, if I recall. AL]
Soumen Nandy - email: email@example.com
There are actually *two* answers to 13 XOR 13. One is the 'bitwise' answer and the other is the 'bytewise' answer.
In order to understand this, you have to understand the XOR (exclusive OR) function. Though I hate to do it, I think including a few truth tables may actually be the clearest start:
A B | A AND B  A OR B  A XOR B
------------------------------
0 0 |    0       0        0
0 1 |    0       1        1
1 0 |    0       1        1
1 1 |    1       1        0
These logical functions (called Boolean, after George Boole, the mathematician who worked out the mathematics of true and false long before there were computers to apply it to) are actually extensions of ordinary human language usage.
Boole had to fight against the fact that we are rather sloppy in common speech, and tried to extract the underlying fundamental relations that are useful in human (and human computer) logic.
For example, when we say that you can use nickels, dimes OR quarters in a newspaper vending machine, we mean the 'inclusive' OR -- if a newspaper costs fifty cents, you are not limited to using two quarters, five dimes, or ten nickels -- any combination of the above coins will do.
However if we tell our children that they may have cake OR ice cream, we generally mean the 'exclusive OR' -- either cake or ice cream but not both.
So while "XOR" may sound like its the name of the Military High Expositor of the Perfidian Redactor Fleet, it's actually a far more scrutable (and far less ill-tempered) part of our everyday thought and speech. While 'exclusive OR' might sound as abstreuse as the Academic High Redactor, or as socially aloof as the Socio- political Most-High Rebuffer, it is actually a concept that is readily mastered by toddler. Try telling your toddler "quit jumping on the sofa OR I will exile you to the penal colony on the fourth -- er, I mean -- your room". If you then reward their compliance (hey, I'm an optimist) with a march upstairs, you'll quickly learn that *no* amount of logical sophism will confuse them enough for them to accept that you intended an inclusive and not an exclusive OR.
[Remember this lesson. I have seen many children befuddle their elders into concessions by pretending *not* to understand such 'fine' distinctions -- "but you told me I could get some cake or ice cream out of the freezer!"]
I mentioned that there were two solutions to 13 XOR 13.
One (the more common) is the so-called 'bitwise' XOR -- 13 in binary is "1101" therefore, if we XOR the 13 with itself, we will find that for every bit we are either XOR'ing 0 with 0 or 1 with 1. (No surprise there - we are XORing each bit with itself). In short
    1101
XOR 1101
--------
    0000

or simply: any number bitwise XOR'ed with itself equals zero.
The less common (but somewhat simpler) meaning would be the "bytewise" XOR. XOR is a binary function. Both its inputs and its outputs are 'binary' -- having only two values (0/1, true/false, high/low, spank/not-spank, accept-Perfidian-hegemony/disintegrate-into-component-atoms). In this case, the convention is to consider the values to be "zero" and "not-zero". The first 13 is not zero. Remarkably, neither is the second 13. Not-zero XOR Not-zero equals ... zero.
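Both meanings are one-liners in most programming languages; a quick Python sketch:

```python
def logical_xor(a, b):
    # 'Bytewise' XOR: each operand collapses to just 'zero' or 'not-zero'.
    return bool(a) != bool(b)

print(13 ^ 13)               # 0 -- bitwise: every bit cancels against itself
print(bin(0b1101 ^ 0b1101))  # 0b0
print(logical_xor(13, 13))   # False -- not-zero XOR not-zero is 'zero'
print(logical_xor(13, 0))    # True
```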
So either way you look at it, 13 XOR 13 is zero, and your days of independence as savages are numbered... er, strike that last part.
Boolean logic leads to some interesting statements (okay, so they are interesting if you've never been subjected to the rather annoying literalisms of a burgeoning logician or computer programmer).
"One and One equals One" (One *plus* One equals two) "Two and Two equals either two (bitwise) or One (bytewise)" "Are you going out or staying home" Reply: "Yes" "Perfidians do not kill. Savages negate their own being by attempting denial of the immutable fact of Perifidian rule" - Soumen "who is not preparing for an impending Perfidian invasion, denies that he looks like a Perfidian High Bombast, AND has no idea how such ridiculous rumors get started"
Last revised November 1, 1997.
Go to Top Menu..
..of Duane & Eva's Old Kentucky Home Page
Please send comments.
All contents copyright (C) 1997, Duane Bristow. All rights reserved.