Xenon's Weblog

"Between the Idea and the Reality, Between the Motion and the Act, Falls the Shadow" (T.S. Eliot)

1 Satoshi Mihara for the μ→eγ collaboration, review meeting at PSI, Jul 2002: Photon Detector. Satoshi Mihara, ICEPP, Univ. of Tokyo. 1. Large Prototype. – ppt download

3 Satoshi Mihara for the μ→eγ collaboration, review meeting at PSI, Jul 2002: MeV Compton Gamma at TERAS. Electron energy: 762 MeV; max. current 200 mA. Compton γ of 40 MeV (also 20 MeV and 10 MeV) provided. Beam test in February for 2 weeks.

Source: 1 Satoshi Mihara for the μ→eγ collaboration, review meeting at PSI, Jul 2002 Photon Detector Satoshi Mihara ICEPP, Univ. of Tokyo 1. Large Prototype. – ppt download


05/01/2017 Posted by | science | Leave a comment

Satoshi MIHARA, U Zuerich Seminar 1: MEG Experiment at PSI, Liquid Xenon Photon Detector. Satoshi MIHARA, ICEPP, Univ. of Tokyo. – ppt download

Satoshi MIHARA, U Zuerich Seminar 3: μ→eγ. Lepton Flavor Violation (LFV) is strictly forbidden in the SM. Neutrino oscillation: lepton flavour is not conserved, but its contribution is ∝ (m_ν/m_W)^4. Supersymmetry: off-diagonal terms in the slepton mass matrix can give a rate just below the current limit, Br(μ→eγ) < 1.2 × 10^-11 (MEGA, PRL 83 (1999)).
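For orientation (this formula is not on the slides), the suppression quoted above is the commonly cited back-of-the-envelope Standard Model estimate for μ→eγ with massive neutrinos, where U is the neutrino mixing matrix and Δm² the mass-squared splittings; the absurdly small result is why any observed signal would point to new physics such as supersymmetry.

```latex
% Hedged sketch, not taken from the slides: the commonly quoted SM estimate.
\mathrm{Br}(\mu \to e\gamma) \;\simeq\; \frac{3\alpha}{32\pi}
  \left| \sum_{i} U_{\mu i}^{*} U_{e i} \, \frac{\Delta m_{i1}^{2}}{M_{W}^{2}} \right|^{2}
  \;\sim\; 10^{-54}
```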

Source: Satoshi MIHARA, U Zuerich Seminar 1 MEG Experiment at PSI Liquid Xenon Photon Detector Satoshi MIHARA ICEPP, Univ. of Tokyo. – ppt download

05/01/2017 Posted by | science | Leave a comment

Physicists investigate lower dimensions of the Universe

(PhysOrg.com) — Several speculative theories in physics involve extra dimensions beyond our well-known four (which are broken down into three dimensions of space and one of time). Some theories have suggested 5, 10, 26, or more, with the extra spatial dimensions “hiding” within our observable three dimensions. One thing that all of these extra dimensions have in common is that none has ever been experimentally detected; they are all mathematical predictions.

More recently, physicists have been theorizing the possibility of lower dimensionality, in which the universe has only two or even one spatial dimension(s), along with one dimension of time. The theories suggest that the lower dimensions occurred in the past when the universe was much smaller and had a much higher energy level (and temperature) than today. Further, it appears that the concept of lower dimensions may already have some experimental evidence in cosmic ray observations.

Now in a new study, physicists Jonas Mureika from Loyola Marymount University in Los Angeles, California, and Dejan Stojkovic from SUNY at Buffalo in Buffalo, New York, have proposed a new and independent method for experimentally detecting lower dimensions. They’ve published their study in a recent issue of Physical Review Letters.

In 2010, a team of physicists including Stojkovic proposed a lower-dimensional framework in which spacetime is fundamentally a (1 + 1)-dimensional universe (meaning it contains one spatial dimension and one time dimension). In other words, the universe is a straight line that is “wrapped up” in such a way that it appears (3 + 1)-dimensional at today’s lower energies and larger scales, which is what we see.

The scientists don’t know the exact energy levels (or the exact age of the universe) when the transitions between dimensions occurred. However, they think that the universe’s energy level and size directly determine its number of dimensions, and that the number of dimensions evolves over time as the energy and size change. They predict that the transition from a (1 + 1)- to a (2 + 1)-dimensional universe happened when the temperature of the universe was about 100 TeV (teraelectronvolts) or less, and the transition from a (2 + 1)- to a (3 + 1)-dimensional universe happened later at about 1 TeV. Today, the temperature of the universe is about 10^-3 eV.
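As a rough, independent sanity check on those scales (not a calculation from the paper), temperatures quoted in electronvolts can be converted to kelvin with the Boltzmann constant; today's roughly 2.7 K background radiation corresponds to a thermal energy of order 10^-3 eV or below, while 1 TeV corresponds to about 10^16 K.

```python
# Minimal sketch (not from the paper): convert between temperature in kelvin
# and the equivalent thermal energy k_B * T in electronvolts.
K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant in eV per kelvin

def kelvin_to_ev(temp_k: float) -> float:
    """Thermal energy k_B * T in eV for a temperature in kelvin."""
    return K_B_EV_PER_K * temp_k

def ev_to_kelvin(energy_ev: float) -> float:
    """Temperature in kelvin whose thermal energy k_B * T equals energy_ev."""
    return energy_ev / K_B_EV_PER_K

print(kelvin_to_ev(2.725))   # ~2.3e-4 eV: today's CMB, of order 10^-3 eV
print(ev_to_kelvin(1e12))    # ~1.2e16 K: the ~1 TeV (2+1) -> (3+1) transition
print(ev_to_kelvin(100e12))  # ~1.2e18 K: the ~100 TeV (1+1) -> (2+1) transition
```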

So far, there may already be one piece of experimental evidence for the existence of a lower-dimensional structure at a higher energy scale. When observing families of cosmic ray particles in space, scientists found that, at energies higher than 1 TeV, the main energy fluxes appear to align in a two-dimensional plane. This means that, above a certain energy level, particles propagate in two dimensions rather than three.

In the current study, Mureika and Stojkovic have proposed a second test for lower dimensions that would provide independent evidence for their existence. The test is based on the assumption that a (2 + 1)-dimensional spacetime, which is a flat plane, has no gravitational degrees of freedom. This means that gravity waves and gravitons cannot have been produced during this epoch. So the physicists suggest that a future gravitational wave detector looking deep into space might find that primordial gravity waves cannot be produced beyond a certain frequency, and this frequency would represent the transition between dimensions. Looking backwards, it would appear that one of our spatial dimensions has “vanished.”

The scientists added that it should be possible, though perhaps more difficult, to test for the existence of (1 + 1)-dimensional spacetime.

“It will be challenging with the current experiments,” Stojkovic told PhysOrg.com. “But it is within the reach of both the LHC and cosmic ray experiments if the two-dimensional to one-dimensional crossover scale is 10 TeV.”

Lower dimensions at higher energies could have several advantages for cosmologists. For instance, models of quantum gravity in (2 + 1) and (1 + 1) dimensions could overcome some of the problems that plague quantum gravity theories in (3 + 1) dimensions. Also, reducing the dimensions of spacetime might solve the cosmological constant problem, which is that the cosmological constant is fine-tuned to fit observations and does not match theoretical calculations. A solution may lie in the existence of energy that is currently hiding between two folds of our (3 + 1)-dimensional spacetime, which will open up into (4 + 1)-dimensional spacetime in the future when the universe’s decreasing energy level reaches another transition point.

“A change of paradigm,” Stojkovic said about the significance of lower dimensions. “It is a new avenue to attack long-standing problems in physics.”

More information: Jonas Mureika and Dejan Stojkovic. “Detecting Vanishing Dimensions via Primordial Gravitational Wave Astronomy.” Physical Review Letters 106, 101101 (2011). DOI: 10.1103/PhysRevLett.106.101101

Copyright 2010 PhysOrg.com.

23/03/2011 Posted by | science | Leave a comment

The Royal Society’s lost women scientists

  • Richard Holmes
  • The Observer, Sunday 21 November 2010
  • A study of the Royal Society’s archives reveals that women played a far more important role in the development and dissemination of science than had previously been thought, says Richard Holmes

    In December 1788, the astronomer royal, Dr Nevil Maskelyne FRS, wrote effusively to 38-year-old Caroline Herschel congratulating her on being the “first woman in the history of the world” to discover not one, but two new comets. No woman since renowned Greek mathematician Hypatia of Alexandria had had such an impact on the sciences. Her celebrity would, as the director of the Paris Observatory, Pierre Méchain, noted, “shine down through the ages”.

    Nevertheless, observed Dr Maskelyne with jocular good humour, he hoped Caroline did not feel too isolated among the male community of astronomers in Britain. He hoped she would not be tempted to ride off alone into outer space on “the immense fiery tail” of her new comet. “I hope you, dear Miss Caroline, for the benefit of terrestrial astronomy, will not think of taking such a flight, at least till your friends are ready to accompany you.” Or at least until her achievements were recognised by his colleagues in the Royal Society. Curiously, no such recognition was immediately forthcoming.

    All this year, and all round the globe, the Royal Society of London has been celebrating its 350th birthday. In a sense, it has been a celebration of science itself and the social importance of its history. The senior scientific establishment in Britain, and arguably in the world, the Royal Society dates to the time of Charles II. Its early members included Isaac Newton, Edmond Halley, Robert Hooke, Thomas Hobbes, Christopher Wren and even – rather intriguingly – Samuel Pepys. But amid this year’s seminars, exhibitions and publications, there has been one ghost at the feast: the historic absence of women scientists from its ranks.

    Although it was founded in 1660, women were not permitted by statute to become fellows of the Royal Society until 285 years later, in 1945. (An exception was made for Queen Victoria, who was made a royal fellow.) It will be recalled that women over the age of 30 had won the vote nearly 30 years earlier, in 1918. Very similar exclusions operated elsewhere: in the American National Academy of Sciences until 1925; in the Russian National Academy until 1939; and even in that home of Enlightenment science, the Académie des Sciences in France, until 1962. Marie Curie was rejected for membership of the Académie in 1911, the very year she won her second Nobel prize.

    It is also true that by the turn of the 21st century, there had been more than 60 distinguished women fellows of the society. Many have become household names, such as the brilliant crystallographer Dorothy Hodgkin, who famously won a Nobel prize in 1964, and whose whirling portrait by Maggi Hambling (1985) now hangs in the National Portrait Gallery. Her heroic life – she mapped the structure of penicillin and then dedicated 35 years to deciphering the structure of insulin – is told in a superb biography by Georgina Ferry.

    Yet in Victorian Britain, the very idea of women doing serious science (except botany and perhaps geology) was widely ridiculed and even botany (with its naming of sexual parts) could be regarded as morally perilous. Mary Anning (1799-1847), the great West Country palaeontologist, struggled for years to have her discoveries – such as the plesiosaurus – recognised as her own.

    In March 1860, Thomas Henry Huxley FRS, famed as “Darwin’s bulldog”, wrote privately to his friend, the great geologist Charles Lyell FRS: “Five-sixths of women will stop in the doll stage of evolution, to be the stronghold of parsonism, the drag on civilisation, the degradation of every important pursuit in which they mix themselves – intrigues in politics and friponnes in science.”

    This can be taken as typical of certain Victorian assumptions, including the idea that physiologically the female brain simply could not cope with mathematics, experimental proofs or laboratory procedures. Certainly compared with their literary sisters, the scientific women of the 19th century still appear invisible, if not actually non-existent. What female scientific names can be cited to compare with Jane Austen, Fanny Burney, the three Brontë sisters, George Eliot or Harriet Martineau?

    Yet my re-examination of the Royal Society archives during this 350th birthday year has thrown new and unexpected light on the lost women of science. I have tracked down a series of letters, documents and rare publications that begin to fit together to suggest a very different network of support and understanding between the sexes. It emerges that women had a far more fruitful, if sometimes conflicted, relationship with the Royal Society than has previously been supposed.

    It is at once evident that they played a significant part in many team projects, working both as colleagues and as assistants (though hitherto only acknowledged in their family capacities as wives, sisters or daughters). More crucially, they pioneered new methods of scientific education, not only for children, but for young adults and general readers. They also played a vital part as translators, illustrators and interpreters and, most particularly, as “scientific popularisers”.

    Indeed, the Royal Society archives suggest something so fundamental that it may require a subtle revision of the standard history of science in Britain. This is the previously unsuspected degree to which women were a catalyst in the early discussion of the social role of science. More even than their male colleagues, they had a gift for imagining the human impact of scientific discovery, both exploring and questioning it. Precisely by being excluded from the fellowship of the society, they saw the life of science in a wider world. They raised questions about its duties and its moral responsibilities, its promise and its menace, in ways we can appreciate far more fully today.

    The first woman to attend a meeting of the Royal Society was Margaret Cavendish, the Duchess of Newcastle, in May 1667. There were protests from the all-male fellows – Pepys recorded the scandal – and the dangerous experiment was not repeated for another couple of centuries. But Margaret could take advantage of her position, being the second wife of William Cavendish FRS, a member of one of the great aristocratic dynasties of British science. She knew many of the leading fellows, such as Robert Boyle and Thomas Hobbes. On this occasion, she witnessed several experiments of “colours, loadstones, microscopes” and was “full of admiration”, although according to Pepys, her dress was “so antic and her deportment so unordinary” that the fellows were made strangely uneasy. But this may have been for other reasons.

    Margaret later raised issues that have become perennial. She mocked the dry, empirical approach of the fellows, violently attacked the practice of vivisection and wondered what rational explanation could be given for women’s exclusion from learned bodies. She questioned the Baconian notion of relentless mechanical progress, in favour of gentler Stoic doctrines, in her polemical Observations on Experimental Philosophy (1668). She wrote a lively Memoir, in which she gave an interesting definition of poetry as “mental spinning”, being useful to the scientific mind. She also produced arguably the first-ever science-fiction story, The Blazing World (1666), which considered the alternative futures of science. All this earned her the sobriquet “Mad Madge”.

    The idea of animals having rights within any humane society was recognised early by female scientists. Anna Barbauld, the brilliant young assistant to Joseph Priestley FRS, the great 18th-century chemist, noticed the distress of his laboratory animals as they were steadily deprived of air in glass vacuum jars, during the experiments in which he first discovered oxygen (1774). Accordingly, she wrote a poem in the voice of one of Priestley’s laboratory mice and stuck it in the bars of the mouse’s cage for Priestley to find the next morning. She entitled it: “The Mouse’s Petition to Dr Priestley, Found in the Trap where he had been Confined all Night”.

    For here forlorn and sad I sit,
    Within the wiry Grate,
    And tremble at the approaching Morn
    Which brings impending fate…

    The cheerful light, the Vital Air,
    Are blessings widely given;
    Let Nature’s commoners enjoy
    The common gifts of Heaven.

    The well-taught philosophic mind
    To all Compassion gives;
    Casts round the world an Equal eye,
    And feels for all that lives.

    The notion that animals and, indeed, all life-forms on Earth, had a right to “the common gifts of heaven” can be seen as the first stirrings of the whole environmental movement and the demands it now makes upon science and industry.

    By contrast, the first original paper that might be considered as part of a scientific research programme conducted by a woman and published in the Royal Society’s journal, Philosophical Transactions, concerned extraterrestrial phenomena. It was by Caroline Herschel in August 1786, gravely entitled “An Account of a new Comet, in a letter from Miss Caroline Herschel to Mr Charles Blagden MD, Secretary to the Royal Society”. Caroline was sister to William Herschel FRS, the great Romantic astronomer who discovered Uranus and first proposed the existence of galactic systems, such as Andromeda, beyond our own Milky Way. But Caroline’s speciality was discovering new comets, of which she found eight at a time when fewer than 30 were known. Her brother was immensely proud of her, built her special telescopes and helped her to obtain the first state salary for a female astronomer in Britain.

    Even so, William carefully annotated Caroline’s historic paper: “Since my sister’s observations were made by moonlight, twilight, hazy weather, and very near the horizon, it would not be surprising if a mistake had been made.” But it had not. Caroline also kept an observational journal for more than 30 years. This gives not only astronomical data, but emotional data too: it’s an invaluable early view of a brother-sister scientific team at work, including their many trials and heartaches. It is one of the earliest records of how science actually gets done, its secret tribulations as well as its public triumphs.

    Women also saw the educational possibilities of science in a broader context than their male colleagues. Jane Marcet, encouraged by her husband, Alexander Marcet FRS, published the first truly bestselling scientific populariser for young people in 1806. Breezily entitled Conversations in Chemistry, in which the elements of that science are familiarly explained and illustrated by Experiments, it eventually sold as many books as the poetry of Lord Byron (also an FRS). One of its 15 later editions inspired the great 19th-century physicist Michael Faraday FRS to begin his career in science. He started reading the book in 1810, while still working as an apprentice bookbinder and later recalled: “I felt I had got hold of an anchor in chemical knowledge and clung fast to it.”

    Marcet reinvented the dialogue form as a series of imaginary scientific lessons between a teacher “Mrs B” (possibly based on a famous astronomer tutor, Margaret Bryan) and her two young women pupils. Emily is observant and rather serious, while Caroline is mischievous but inventive (useful qualities for a young scientist). Caroline continually tempts Mrs B into the more imaginative aspects of science.

    While discussing the composition of water, Mrs B points out that oxygen has “greater affinity” for other elements than hydrogen. Caroline instantly grasps the romantic possibilities of this: “Hydrogen, I see, is like nitrogen, a poor dependent friend of oxygen, which is continually forsaken for greater favourites.” Mrs B starts to reply — “The connection or friendship as you choose to call it is much more intimate between oxygen and hydrogen in the state of water” – then sees where this is going and hastily breaks off: “But this is foreign to our purpose.”

    With a suppressed giggle, Caroline has discovered “sexual chemistry” and the reader will remember forever the composition of a water molecule: two hydrogen atoms in unrequited love with an oxygen atom (H2O). Caroline adds suggestively: “I should extremely like to see water decomposed…” Jane Marcet went on to develop the “Conversations” brand in a series of other books on physiology, botany, natural philosophy and other scientific topics of the day.

    By using dialogue, the “Conversations” brought science popularising a step closer to fiction. Indeed, the most universal of all science fiction novels, Frankenstein, or the Modern Prometheus, was largely inspired by the chemical lectures of Sir Humphry Davy FRS. It was written by a teenage Mary Shelley, who attended Davy’s lectures at the newly founded Royal Institution (which encouraged women members) when she was only 14. Published six years later in 1818, this extraordinary novel of ideas first raised the question of scientists’ social responsibility for their discoveries and inventions.

    Its cruder and more sensational stage adaptations, starting in the 1820s, also popularised the idea of the “mad scientist”, one of the most powerful of all stereotypes. Equally, the term “Frankenstein’s monster” is still frequently used to refer to any scientific advance, particularly in medicine or biology, that is thought to threaten humanity (stem cell research or GM crops). This is a double-edged propaganda weapon, inciting hysteria as much as rational caution and discussion. But Mary Shelley’s creation has proved how vital it is for science to engage with public fears, as well as fantasies.

    The work itself inspired at least one remarkable scientific spin-off, Jane Loudon’s witty novel The Mummy!, published in 1827. Though the title, complete with exclamation mark, gives away the basic plot, the novel is far from anti-scientific. Paradoxically, it celebrates a brilliant array of futuristic inventions and technologies, such as centrally heated streets, cheap, compressed-air balloon travel, houses moved by railway and gas-illuminated safety hats for ladies. After this “wild and strange story”, as she called it, Jane Loudon went on to find more conventional fame as the playful author of The Young Naturalist, the first of many scientific books for children.

    At the very time these novels were making their impact, astronomer John Herschel FRS, the son of William, and a future secretary of the Royal Society, was writing a series of historic letters on a new but crucially related field: the public understanding of science. Unusually, his correspondent was a woman, Mary Fairfax Somerville. Their subject was the possibilities of “popularising” the new cosmology of the great French astronomer Pierre-Simon Laplace, whose work, Mécanique céleste, was regarded as second only to Newton’s Principia.

    Born in 1780, Mary Somerville was the brilliant and charming Scottish wife of William Somerville FRS. She moved freely in Victorian scientific circles and was a friend of the Herschels, Faraday, Charles Babbage FRS, Jane Marcet and Ada Byron. Ada, incidentally, was the poet’s beautiful, headstrong daughter, a considerable mathematician in her own right. She had first explored the theories of Babbage concerning his famous “analytical engine”, by adding her own highly original but technical commentary to a review of his work (which she translated from the French). Though rightly credited with a share in inventing the modern computer, Ada never risked producing a piece of popular science of the sort Mary Somerville was considering.

    During an unhappy first marriage, Mary had taught herself mathematics and had studied both astronomy and painting. She first visited the Herschels’ telescopes at Slough in 1816. She described herself as “intensely ambitious to excel in something, for I felt in my own breast that women were capable of taking a higher place in creation than that assigned to them in my early days, which was very low”. Her first plan was to be a painter.

    In 1828, she was challenged by Lord Brougham to produce a popular summation of the new French astronomy for his philanthropic Society for the Diffusion of Useful Knowledge. After immense and covert labour (“I hid my papers as soon as the bell announced a visitor, lest anyone should discover my secret”), she completed an outstanding translation and interpretation of Laplace’s difficult astronomical book on the structure and mathematics of the solar system, retitling it The Mechanism of the Heavens (1830). Her unscientific friend, novelist Maria Edgeworth, described Mary admiringly: “She has her head in the stars, but feet firm upon earth… intelligent eyes… Scotch accent… the only person in England who understands Laplace.”

    The translation had originally been intended as a much shorter and simpler work. But now, John Herschel encouraged Mary to continue with the full text, but recommended an extended and popular introduction. “The attention of many will be turned to a work from your pen, who will just possess enough mathematical knowledge to be able to read the first [introductory] chapter, without being able to follow you into its applications… were I you, I would devote to this first part at least double the space you have done… I cannot recommend too much, clearness, fullness and order in the exposé of the principles,” he wrote to her in February 1830. Shrewdly, Herschel urged her to continue to think like a painter, to sketch in firm “outlines”, to “illustrate” vividly, to consider the overall composition: “As a painter you will understand my meaning.”

    This long, fluent introductory essay, “A Preliminary Dissertation”, virtually free of any mathematical notation, used brilliant explanatory analogies and metaphors to describe how our solar system was formed and controlled by gravity. While the main translation became the standard textbook for science postgraduates at Cambridge (unheard-of for a woman author), the “Preliminary Dissertation” made her famous with a general reading public. Again encouraged by Herschel, she republished it separately in 1832 and it continued to be widely read for the next 50 years.

    Maria Edgeworth singled out its exemplary quality as popular science. “The great simplicity of your manner of writing, I may say of your mind, particularly suits the scientific sublime – which would be destroyed by what is commonly called fine writing. You trust sufficiently to the natural interest of your subject, to the importance of the facts, the beauty of the whole, and the adaptation of the means to the end…” Mary’s sense of wonder made every reader “feel with her”.

    Mary Somerville then set out to write a completely original book, On the Connexion of the Physical Sciences in 1834. In it, she surveyed the whole field of contemporary sciences – chemistry, astronomy, physics – and drew attention to the unity of their underlying principles and methodology. Nothing like this had previously been attempted. As a result of a positive review by William Whewell FRS, the future master of Trinity College, Cambridge, the inclusive term “scientist” was coined. Amazingly, the word had not existed before 1834. This book ran to 10 editions and shaped the progressive idea of science for more than half a century. Physicist James Clerk Maxwell FRS wrote in 1870: “It was one of those suggestive books which put into definitive, intelligible and communicative form the guiding ideas that are already working in the minds of men of science, so as to lead them to discoveries, but which they cannot yet shape into a definitive statement.”

    Mary Somerville went on to write books about the new geography and the ever-expanding world of the microscope: Physical Geography in 1848 and Molecular and Microscopic Science in 1869 (written in her late 80s). Her autobiography, Personal Recollections, was published posthumously in 1873. Throughout her life, she supported women’s suffrage (her signature was the first on JS Mill’s petition to Parliament), campaigned against vivisection and against slavery in America.

    She is now largely remembered because she had an Oxford college named after her in 1879. But in her time she was the greatest of all 19th-century women science writers, known as “the Queen of science” and elected honorary fellow of the Royal Astronomical Society (1835) along with Caroline Herschel. If she never entered into the Royal Society in person, her fine marble bust by Chantrey did so. It was first installed in the Great Hall and now resides in the research library of the Royal Society, flanked by Faraday and Darwin.

    Most important, Mary Somerville became an outstanding model for a later generation of younger women in science. This was notably true of the first great American woman astronomer, Maria Mitchell. Born in 1818 on the remote whaling-station of Nantucket, Maria had a Quaker upbringing, where her scientific interests were encouraged by her schoolmaster father. Three years after her discovery of a new comet in 1847, she was elected first woman member of the American Association for the Advancement of Science, aged only 32.

    Modestly revelling in her newfound celebrity, Maria then toured all the great observatories of Europe, subjecting their various astronomers to her candid, Nantucket eye and salty humour. She visited Greenwich Observatory and the Royal Society, bringing with her as a calling card the first known photograph of a star. For the most part, she was enthusiastically received, especially by the kindly John Herschel, though she was “riled” by Whewell’s chauvinist teasing while dining at Trinity College high table. She was also amazed to be told by Sir George Airy FRS, the British astronomer royal: “In England, there is no astronomical public and we do not need to make science popular.”

    Undaunted, Maria pressed on to meet Mary Somerville, the great object of her tour, who now lived in Rome. She was disconcerted to find the Vatican observatory closed to women after dark, a distinct setback for a professional astronomer. (“I was told that Mrs Somerville, the most learned woman in all Europe, had been denied admission – she could not enter an observatory that was at the same time a monastery.”) But she was captivated by Mary Somerville, both by her directness and by her fantastic range of interests.

    “Mrs Somerville’s conversation was marked by great simplicity, with no tendency to the essay style. She touched upon the recent discoveries in chemistry, of the discovery of gold in California, of the nebulae, of comets, of the satellites, of the planets…” To Maria’s satisfaction, she also “spoke with disapprobation of Dr Whewell’s attempt to prove that our planet was the only one inhabited by reasoning beings…”

    Maria later wrote an essay in Mary Somerville’s praise. Like her heroine, Maria identified with the anti-slavery cause and the female suffragist movements. But being a generation younger, Maria Mitchell was far more assertive than Mary Somerville about the vital importance of women actually doing science. In fact, she took the Royal Society’s motto, Nullius in verba (“Take nobody’s word for it”) to have a particular relevance to the value of science for women. Too often, and for too long, those words had been male.

    “The great gain would be freedom of thought. Women, more than men, are bound by tradition and authority. What the father, the brother, the doctor and the minister have said has been received undoubtingly. Until women throw off this reverence for authority they will not develop. When they do this, when they come to the truth through their investigations, when doubt leads them to discover, the truth which they get will be theirs and their minds will work on and on, unfettered.”

    When appointed first professor of astronomy at Vassar in 1865 aged 47, Mitchell installed a symbolic bust of Somerville in her famous teaching observatory, just as the Royal Society had done, though with more didactic intent. Beneath its gaze she mentored a brilliant circle of devoted female students to take up the baton of astronomy.

    For all her gifts, Mary Somerville had denied women’s ability to do original science. In Chapter 11 of her Personal Recollections, she had written ruefully: “I was conscious that I had never made a discovery myself, I had no originality. I have perseverance and intelligence, but no genius. That spark from heaven is not granted to [my] sex… whether higher powers may be allotted to us in another state of existence, God knows, original genius in science at least is hopeless in this. At all events it has not yet appeared in the higher branches of science.”

    Maria Mitchell, as a great science teacher, came to think this conclusion was a historic mistake and emphatically told her students why, in her celebrated Vassar lectures: “The laws of nature are not discovered by accident; theories do not come by chance, even to the greatest minds, they are not born in the hurry and worry of daily toil, they are diligently sought… and until able women have given their lives to investigation, it is idle to discuss their capacity for original work.”

    With the inspiring examples of both Somerville and Mitchell, the work of popularisation rapidly expanded in Victorian England after 1860. Mary Ward, cousin to astronomer William Parsons FRS, brought out beautifully illustrated books describing the history and function of scientific instruments: The Microscope in 1868 and The Telescope in 1869. Both these books carried extensive advertisements for these instruments, priced from 10 guineas, now within the reach of ordinary families.

    Arabella Buckley, who had been assistant to Charles Lyell FRS, used her experience to write one of the earliest general surveys of science for young people, entitled A Short History of Natural Science, and of the Progress of Discovery From the Time of the Greeks to the Present Day (1876). Margaret Gatty, building further on the tradition of scientific tales and wonders, published her semi-fictional series Parables From Nature, which ran to an astonishing 18 editions between 1855 and 1882.

    The social impact of Darwin’s On the Origin of Species (1859) produced a mixed response from the women popularisers. On the one hand, there was Agnes Giberne’s Sun, Moon and Stars (1880), which might be described as a work of Romantic revisionism, prefacing each chapter with biblical quotations and emphasising the divine order and established hierarchy in the universe. “When I consider Thy heavens, the work of Thy fingers, what is man that Thou art mindful of him, or the son of man that thou visitest him?” Her book was written hopefully “for children, working men or even grown-up people of the educated classes”.

    On the other hand, there was Alice Bodington, the fearless Darwinian author of Studies in Evolution (1890). Her polemical and provocative essays in the radical Westminster Review followed the social and theological implications of Darwin’s theories. In Religion, Reason and Agnosticism (1893), she remarks that the “destruction of old creeds” by science must lead to a new, but necessary scepticism. In this unusually frank essay, she confesses her own loss of faith. “Deep as the microscope can fathom, far as the telescope and spectroscope can take us into the universe, we see evidence of unvarying law… not of the personal interference of the Deity.” Evolution had destroyed the idea of intelligent design, “once regarded as the cornerstone of natural religion”. Without science, Alice Bodington felt, religion was “as a fairy tale or an opium dream, delusive though exquisitely fair, it can give no permanent support, no real comfort”.

    Yet, paradoxically, science itself might eventually show that “the extraordinary, the unique instinct of religion” in mankind was itself evolving “from the lowest fetish worship” to some “unimaginable glory”. In this, the progressive understanding of science would lead not to philosophic despair, but to continuous hope and wonder in the universe. “The deathless instinct of religion bids us not despair.”

    By the turn of the century, major science was indeed being done by women, just as Mary Somerville had hoped and Maria Mitchell had firmly predicted. In the 1880s, Margaret Huggins, an early expert on stellar photography, began to sign Royal Society papers on spectroscopy jointly with her husband, Sir William Huggins FRS. Hertha Ayrton was producing highly original work on the electric arc. Although rejected as a fellow in 1902 (amid a stormy debate that Margaret Cavendish would have relished), Hertha was the first woman to read her own paper at a Royal Society meeting two years later and was awarded the Royal Society’s Hughes Medal in 1906. Slowly, the tide was turning. The Lost Women of Science were about to be found.

    Yet these remarkable women had already added a third dimension to the whole scientific enterprise as originally conceived by Lord Bacon, and by the founding fathers of the Royal Society. The two primary aims of science had long been established as the discovery of the nature of physical reality “by experiment and proof” and the applications of such discoveries “for the relief of man’s estate”. In Bacon’s terms, science should bring “light” and science should bear “fruit”.

    The Lost Women had helped to add a third, fundamental imperative. Science should sow “seeds”. Science should broadcast, should disperse the seeds of knowledge to all and as imaginatively as possible. Science, and the scientific method, should become a new means of general education and enlightenment, not merely for the elite. Until scientific knowledge was explained, explored and widely understood by the population at large, the work of scientists would always be incomplete. This third dimension or imperative added a radical new parameter to both the practice and the philosophy of science in Britain.

    As Maria Mitchell had put it, with one of her famous smiles: “We especially need imagination in science. It is not all mathematics, nor all logic, but it is somewhat beauty and poetry.”

    Richard Holmes’s The Age of Wonder won the Royal Society’s Science Books Prize for 2009; its sequel, The Lost Women of Victorian Science, will be published by HarperCollins and Pantheon USA

    28/11/2010 Posted by | science | Leave a comment

    “Death of the Open Web”?

    Those words have an ominous ring for those of us who have a deep appreciation of the Internet as well as high hopes for its future. The phrase comes from the title of a recent New York Times article that struck a nerve with some readers. The article paints a disquieting picture of the web as a “haphazardly planned” digital city where “malware and spam have turned living conditions in many quarters unsafe and unsanitary.”

    There is a growing sentiment that the open web is a fundamentally dangerous place. Recent waves of hacked WordPress sites revealed exploited PHP vulnerabilities and affected dozens of well-known designers and bloggers like Chris Pearson. The tools used by those with malicious intent evolve just as quickly as the rest of the web. It’s deeply saddening to hear that, according to Jonathan Zittrain, some web users have stooped so low as to set up ‘Captcha sweatshops’ where (very) low-paid people are employed to solve Captcha security technology for malicious purposes all day. This is the part where I weep for the inherent sadness of mankind.

    “If we don’t do something about this,” says Jonathan Zittrain of the insecure web, “I see the end of much of the generative aspect of the technologies that we now take for granted.” Zittrain is a professor of Internet governance and regulation at Oxford University and the author of The Future of the Internet: and How to Stop It; watch his riveting Google Talk on these subjects.

    The Wild West: mainstream media’s favorite metaphor for today’s Internet

    The result of the Internet’s vulnerability is a generation of Internet-centric products — like the iPad, the TiVo and the Xbox — that are not easily modified by anyone except their vendors and their approved partners. These products do not allow unapproved third-party code (such as the kind that could be used to install a virus) to run on them, and are therefore more reliable than some areas of the web. Increased security often means restricted or censored content — and even worse — limited freedoms that could impede the style of innovation that propels the evolution of the Internet, and therefore, our digital future.

    The web of 2010 is a place where a 17-year-old high school student can have an idea for a website, program it in three days, and quickly turn it into a social networking craze used by millions (that student’s name is Andrey Ternovskiy and he invented Chatroulette). That’s innovation in a nutshell. It’s a charming story and a compelling use of the web’s creative freedoms. If the security risks of the Internet kill the ‘open web’ and turn your average web experience into one that is governed by Apple or another proprietary company, the Andrey Ternovskiys of the world may never get their chance to innovate.

    Security Solutions

    We champion innovation on the Internet and it’s going to require innovation to steer it in the right direction. Jonathan Zittrain says that he hopes we can come together on agreements for regulating the open web so that we don’t “feel that we have to lock down our technologies in order to save our future.”

    According to Vint Cerf, vice president and Chief Internet Evangelist at Google, “I think we’re going to end up looking for international agreements – maybe even treaties of some kind – in which certain classes of behavior are uniformly considered inappropriate.”

    Perhaps the future of the Internet involves social structures of web users who collaborate on solutions to online security issues. Perhaps companies like Google and Apple will team up with international governmental bodies to form an international online security council. Or maybe the innovative spirit of the web could mean that an independent, democratic group of digital security experts, designers, and programmers will form a grassroots-level organization that rises to prominence while fighting hackers, innovating on security technology, writing manifestos for online behavior, and setting an example through positive and supportive life online.

    Many people are fighting to ensure your ability to have your voice heard online — so use that voice to participate in the debate, stay informed, and demand a positive future. Concerned netizens and Smashing readers: unite!

    12/08/2010 Posted by | science | | Leave a comment

    The 10 biggest moments in IT history

    Posted by Larry Dignan @ 2:25 am

    Despite its relatively short lifespan, IT has had some huge watershed moments. TechRepublic’s Jack Wallen followed the tech timeline to identify the most pivotal events.

    It’s unlikely that everyone will ever agree on the most important dates in the history of IT. I know my IT timeline has a personal and professional bias. But I’ve tried to be objective in examining the events that have served to shape the current landscape of the modern computing industry. Some of the milestones on my list are debatable (depending upon where you are looking from), but some of them most likely are not. Read on and see what you think.

    1: The development of COBOL (1959)

    There are many languages out there, but none has influenced as many others as COBOL has. What makes COBOL stand out is the fact that there are still machines chugging along, running COBOL apps. Yes, these apps could (and possibly should) be rewritten to a modern standard. But for many IT administrators, those who don’t have the time or resources to rewrite legacy apps, those programs can keep on keeping on.

    2: The development of the ARPANET (1969)

    It is an undeniable fact that the ARPANET was the predecessor of the modern Internet. The ARPANET began in a series of memos, written by J.C.R. Licklider and initially referred to as the “Intergalactic Computer Network.” Without the development of the ARPANET, the landscape of IT would be drastically different.

    3: The creation of UNIX (1970)

    Although many would argue that Windows is the most important operating system ever created, UNIX should hold that title. UNIX began at AT&T Bell Labs, growing out of the lab’s earlier involvement in the MIT-led Multics project. Its biggest initial difference (and most important distinction) was support for more than one user logging in at a time, which helped make the multi-user environment the norm. Note: 1970 marks the date the name “UNIX” was applied.

    4: The first “clamshell” laptop (1979)

    William Moggridge, working for GRiD Systems Corporation, designed the Compass Computer, which finally entered the market in 1982. Tandy later purchased GRiD (because of 20 significant patents it held) but then turned around and resold GRiD to AST, retaining the rights to the patents.

    5: The beginning of Linus Torvalds’ work on Linux (1991)

    No matter where you stand on the Linux versus Windows debate, you can’t deny the importance of the flagship open source operating system. Linux brought the GPL and open source to the forefront, forced many companies (and legal systems) to confront monopolistic practices, and raised the bar for competition. Linux was also the first operating system that let students and small companies think in far bigger ways than their budgets had previously allowed.

    6: The advent of Windows 95 (1995)

    Without a doubt, Windows 95 reshaped the way the desktop looked and felt. When Windows 95 hit the market the metaphor for the desktop became standardized with the toolbar, start menu, desktop icons, and notification area. All other operating systems would begin to mimic this new de facto standard desktop.

    7: The 90s dot-com bubble (1990s)

    The dot-com bubble of the 90s did one thing that nothing else had ever done: It showed that a great idea could get legs and become a reality. Companies like Amazon and Google not only survived the dot-com burst but grew to be megapowers that have significant influence over how business is run in the modern world. But the dot-com bubble did more than bring us companies — it showed us the significance of technology and how it can make daily life faster, better, and more powerful.

    8: Steve Jobs rejoining Apple (1996)

    Really, all I should need to say here is one word: iPod. Had Jobs not come back to Apple, the iPod most likely would never have been brought to life. Had the iPod not been brought to life, Apple would have withered away. Without Apple, OS X would never have seen the light of day. And without OS X, the operating system landscape would be limited to Windows and Linux.

    9: The creation of Napster (1999)

    File sharing. No matter where you stand on the legality of this issue, you can’t deny the importance of P2P file sharing. Without Napster, file sharing would have taken a much different shape. Napster (and the original P2P protocols) heavily influenced the creation of the BitTorrent protocol. Torrents now make up nearly one-third of all data traffic and make sharing of large files easy. Napster also led to the rethinking of digital rights (which to some has negative implications).

    10: The start of Wikipedia (2001)

    Wikipedia has become one of the leading sources of information on the Internet and with good reason. It’s the single largest collaborative resource available to the public. Wikipedia has since become one of the most often cited sources on the planet. Although many schools refuse to accept Wiki resources (questioning the legitimacy of the sources), Wikipedia is, without a doubt, one of the largest and most accessible collections of information. It was even instrumental in the 2008 U.S. presidential election, when the candidates’ Wiki pages became the top hits for voters seeking information. These presidential Wiki pages became as important to the 2008 election as any advertisement.

    What’s missing?

    Were there other important events in the timeline of IT? Sure. But I think few, if any, had more to do with shaping modern computing than the above 10 entries. What’s your take? If you had to list 10 of the most important events (or inventions) of modern computing, what would they be? Share your thoughts with fellow TechRepublic members.



    Larry Dignan is Editor in Chief of ZDNet and Smart Planet as well as Editorial Director of ZDNet sister site TechRepublic. See his full profile and disclosure of his industry affiliations.


    22/04/2010 Posted by | science | , | Leave a comment

    A Primer on the Great Proton Smashup


    By DENNIS OVERBYE
    Published: April 2, 2010

    Yes, the collider finally crashed subatomic particles into one another last week, but why, exactly, is that important? Here is a primer on the collider – with just enough information, hopefully, to impress guests at your next cocktail party.

    Let’s be basic. What does a particle physicist do?

    Particle physicists have one trick that they do over and over again, which is to smash things together and watch what comes tumbling out.

    What does it mean to say that the collider will allow physicists to go back to the Big Bang? Is the collider a time machine?

    Physicists suspect that the laws of physics evolved as the universe cooled from billions or trillions of degrees in the first moments of the Big Bang to superfrigid temperatures today (3 degrees Kelvin) — the way water changes from steam to liquid to ice as temperatures decline. As the universe cooled, physicists suspect, everything became more complicated. Particles and forces once indistinguishable developed their own identities, the way Spanish, French and Italian diverged from the original Latin.

    By crashing together subatomic particles — protons — physicists create little fireballs that revisit the conditions of these earlier times and see what might have gone on back then, sort of like the scientists in Jurassic Park reincarnating dinosaurs.

    The collider, which is outside Geneva, is 17 miles around. Why is it so big?

    Einstein taught us that energy and mass are equivalent. So, the more energy packed into a fireball, the more massive it becomes. The collider has to be big and powerful enough to pack tremendous amounts of energy into a proton.

    Moreover, the faster the particles travel, the harder it is to bend their paths in a circle, so that they come back around and bang into each other. The collider is designed so that protons travel down the centers of powerful electromagnets that are the size of redwood trunks, which bend the particles’ paths into circles, creating a collision. Although the electromagnets are among the strongest ever built, they still can’t achieve a turning radius for the protons of less than 2.7 miles.
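The statement about turning radius follows from the textbook relation between a particle's momentum and the magnetic field bending it. The numbers below are the commonly quoted LHC design values (8.33-tesla dipoles, 7 TeV protons), assumptions of this sketch rather than figures from the article:

```latex
% Hedged sketch with assumed LHC design values (8.33 T dipoles, 7 TeV protons):
r \;=\; \frac{p}{eB} \;\approx\; \frac{p\,[\mathrm{GeV}/c]}{0.3\,B\,[\mathrm{T}]}\ \mathrm{m}
  \;=\; \frac{7000}{0.3 \times 8.33}\ \mathrm{m} \;\approx\; 2.8\ \mathrm{km}
```

The ring's geometric radius is larger still, roughly 4.2 km (about 2.7 miles), because the bending dipoles fill only part of the 27 km circumference.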

    All in all, the bigger the accelerator, the bigger the crash, and the better chance of seeing what is on nature’s menu.

    What are physicists hoping to see?

    According to some theories, a whole list of items that haven’t been seen yet — with names like gluinos, photinos, squarks and winos — because we haven’t had enough energy to create a big enough collision.

    Any one of these particles, if they exist, could constitute the clouds of dark matter, which, astronomers tell us, produce the gravity that holds galaxies and other cosmic structures together.

    Another missing link of physics is a particle known as the Higgs boson, after Peter Higgs of the University of Edinburgh, which imbues other particles with mass by creating a cosmic molasses that sticks to them and bulks them up as they travel along, not unlike the way an entourage forms around a rock star when they walk into a club.

    Have scientists ever seen dark matter?

    It’s invisible, but astronomers have deduced from their measurements of galactic motions that the visible elements of the cosmos, like galaxies, are embedded in huge clouds of it.

    Will physicists see these gluinos, photinos, squarks and winos?

    There is no guarantee that any will be discovered, which is what makes science fun, as well as nerve-racking.

    So how much energy do you need to create these fireballs?

    At the Large Hadron Collider, that energy is now 3.5 trillion electron volts per proton — about as much energy as a flea requires to do a pushup. That may not sound like much, but for a tiny proton, it is a lot of energy. It is the equivalent of a 200-pound man bulking up by 700,000 pounds.
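That last comparison is just the ratio of the beam energy to the proton's rest energy; a minimal sketch of the arithmetic is below (the proton rest energy of about 0.938 GeV is an assumption of this example, not a figure from the article).

```python
# Minimal sketch: the "200-pound man bulking up to ~700,000 pounds" comparison
# is the ratio E / (m c^2) of beam energy to the proton's rest energy.
BEAM_ENERGY_EV = 3.5e12              # 3.5 TeV per proton (figure from the article)
PROTON_REST_ENERGY_EV = 0.938272e9   # ~0.938 GeV (assumed, not from the article)

gamma = BEAM_ENERGY_EV / PROTON_REST_ENERGY_EV
print(round(gamma))         # ~3730
print(round(200 * gamma))   # ~746,000 lb, i.e. roughly 700,000 pounds
```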

    What’s an electron volt?

    An electron volt is the amount of energy an electron would gain passing from the negative to the positive side of a one-volt battery. It is the basic unit of energy and of mass preferred by physicists.
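In everyday units the electron volt is tiny: one eV is about 1.6 × 10^-19 joules (the standard value of the elementary charge, assumed here rather than given in the article), so even 3.5 TeV per proton is well under a microjoule.

```python
# Minimal sketch: electron volts to joules, using the standard elementary charge.
EV_TO_JOULE = 1.602176634e-19  # joules per eV (assumed constant, not from the article)

print(EV_TO_JOULE)             # 1 eV in joules
print(3.5e12 * EV_TO_JOULE)    # 3.5 TeV per proton ~ 5.6e-7 J, under a microjoule
```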

    When protons collide, is there a big bang?

    There is no sound. It’s not like a bomb exploding.

    But wasn’t there an actual explosion in previous trials?

    All that current is dangerous. During the testing of the collider in September 2008, the electrical connection between a pair of the giant magnets vaporized. There are thousands of such connections in the collider, many of which are now believed to be defective. As a result the collider can only run at half-power for the next two years.

    Could the collider make a black hole and destroy the Earth?

    The collider is not going to do anything that high-energy cosmic rays have not done repeatedly on Earth and elsewhere in the universe. There is no evidence that such collisions have created black holes or that, if they have, the black holes have caused any damage. According to even the most speculative string theory variations on black holes, the Large Hadron Collider is not strong enough to produce a black hole.

    Too bad, because many physicists would dearly like to see one.

    An earlier version of this article misstated that the Earth began to cool in the aftermath of the Big Bang.

    A version of this article appeared in print on April 4, 2010, on page WK3 of the New York edition.

    http://www.nytimes.com/2010/04/04/weekinreview/04overbye.html

    06/04/2010 Posted by | science | , , | Leave a comment

    Helium clue found in echo of the Big Bang

    New Scientist, issue 2746, 8 February 2010, by Rachel Courtland


    THE subtle signal of ancient helium has shown up for the first time in light left over from the big bang. The discovery will help astronomers work out how much of the stuff was made during the big bang and how much was made later by stars.

    Helium is the second-most abundant element in the universe after hydrogen. The light emitted by old stars and clumps of hot pristine gas from the early universe suggests helium made up some 25 per cent of the ordinary matter created during the big bang.

    The new data provides another measure. A trio of telescopes has found helium’s signature in the cosmic microwave background (CMB), radiation emitted some 380,000 years after the big bang. The patterns in this radiation are an important indicator of the processes at work at that time. Helium affects the pattern because it is heavier than hydrogen and so alters the way pressure waves must have travelled through the young cosmos. But helium’s effect on the CMB was on a scale too small to resolve until now.

    By combining seven years of data from NASA’s Wilkinson Microwave Anisotropy Probe with observations by two telescopes at the South Pole, astronomers have confirmed its presence. “This is the first detection of pre-stellar helium,” says WMAP’s chief scientist, Charles Bennett.

    These observations are in line with earlier measurements, although not yet as accurate. “I think CMB measurements will surpass them eventually,” says team member David Spergel.

    More accurate numbers could reveal how quickly the early universe expanded. Helium forms from the interaction between protons and neutrons. This is constrained by the number of available neutrons, which would have dropped during the time the brand new universe was expanding as they decayed into protons. So the amount of helium that formed places important limits on how quickly this expansion took place. That could help test theories that postulate extra dimensions or as-yet-unseen particles.
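The connection between the neutron supply and the helium abundance can be made concrete with the standard textbook estimate: if essentially every surviving neutron ends up bound in helium-4, the primordial helium mass fraction follows directly from the neutron-to-proton ratio when nucleosynthesis begins. The sketch below uses the commonly quoted ratio of roughly 1/7, an illustrative assumption rather than a number from this article.

```python
# Minimal sketch: primordial helium mass fraction from the neutron-to-proton
# ratio, assuming all surviving neutrons end up bound in helium-4.
def helium_mass_fraction(n_over_p: float) -> float:
    """Y_p = 2 (n/p) / (1 + n/p)."""
    return 2.0 * n_over_p / (1.0 + n_over_p)

print(helium_mass_fraction(1.0 / 7.0))  # ~0.25, matching "some 25 per cent" above
```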

    Better data should be available in the next few years. The European Space Agency’s Planck satellite, which launched last year, is poised to measure the amount of helium even more precisely.

    09/02/2010 Posted by | science | , | Leave a comment

    Telescope Finds Galaxy’s Most Massive Star Yet

    09/02/2010 Posted by | science | , , | Leave a comment

    Wired Science: Ultra-Precise Quantum-Logic Clock Trumps Old Atomic Clock

    09/02/2010 Posted by | science | Leave a comment

    Our world may be a giant hologram

    New Scientist  –  15 January 2009  by  Marcus Chown

    DRIVING through the countryside south of Hanover, it would be easy to miss the GEO600 experiment. From the outside, it doesn’t look like much: in the corner of a field stands an assortment of boxy temporary buildings, from which two long trenches emerge, at a right angle to each other, covered with corrugated iron. Underneath the metal sheets, however, lies a detector that stretches for 600 metres.

    For the past seven years, this German set-up has been looking for gravitational waves – ripples in space-time thrown off by super-dense astronomical objects such as neutron stars and black holes. GEO600 has not detected any gravitational waves so far, but it might inadvertently have made the most important discovery in physics for half a century.

    For many months, the GEO600 team members had been scratching their heads over inexplicable noise that was plaguing their giant detector. Then, out of the blue, a researcher approached them with an explanation. In fact, he had even predicted the noise before he knew they were detecting it. According to Craig Hogan, a physicist at the Fermilab particle physics lab in Batavia, Illinois, GEO600 has stumbled upon the fundamental limit of space-time – the point where space-time stops behaving like the smooth continuum Einstein described and instead dissolves into “grains”, just as a newspaper photograph dissolves into dots as you zoom in. “It looks like GEO600 is being buffeted by the microscopic quantum convulsions of space-time,” says Hogan.

    If this doesn’t blow your socks off, then Hogan, who has just been appointed director of Fermilab’s Center for Particle Astrophysics, has an even bigger shock in store: “If the GEO600 result is what I suspect it is, then we are all living in a giant cosmic hologram.”

    The idea that we live in a hologram probably sounds absurd, but it is a natural extension of our best understanding of black holes, and something with a pretty firm theoretical footing. It has also been surprisingly helpful for physicists wrestling with theories of how the universe works at its most fundamental level.

    The holograms you find on credit cards and banknotes are etched on two-dimensional plastic films. When light bounces off them, it recreates the appearance of a 3D image. In the 1990s physicists Leonard Susskind and Nobel prizewinner Gerard ‘t Hooft suggested that the same principle might apply to the universe as a whole. Our everyday experience might itself be a holographic projection of physical processes that take place on a distant, 2D surface.

    The “holographic principle” challenges our sensibilities. It seems hard to believe that you woke up, brushed your teeth and are reading this article because of something happening on the boundary of the universe. No one knows what it would mean for us if we really do live in a hologram, yet theorists have good reasons to believe that many aspects of the holographic principle are true.

    Susskind and ‘t Hooft’s remarkable idea was motivated by ground-breaking work on black holes by Jacob Bekenstein of the Hebrew University of Jerusalem in Israel and Stephen Hawking at the University of Cambridge. In the mid-1970s, Hawking showed that black holes are in fact not entirely “black” but instead slowly emit radiation, which causes them to evaporate and eventually disappear. This poses a puzzle, because Hawking radiation does not convey any information about the interior of a black hole. When the black hole has gone, all the information about the star that collapsed to form the black hole has vanished, which contradicts the widely affirmed principle that information cannot be destroyed. This is known as the black hole information paradox.

    Bekenstein’s work provided an important clue in resolving the paradox. He discovered that a black hole’s entropy – which is synonymous with its information content – is proportional to the surface area of its event horizon. This is the theoretical surface that cloaks the black hole and marks the point of no return for infalling matter or light. Theorists have since shown that microscopic quantum ripples at the event horizon can encode the information inside the black hole, so there is no mysterious information loss as the black hole evaporates.

    Crucially, this provides a deep physical insight: the 3D information about a precursor star can be completely encoded in the 2D horizon of the subsequent black hole – not unlike the 3D image of an object being encoded in a 2D hologram. Susskind and ‘t Hooft extended the insight to the universe as a whole on the basis that the cosmos has a horizon too – the boundary from beyond which light has not had time to reach us in the 13.7-billion-year lifespan of the universe. What’s more, work by several string theorists, most notably Juan Maldacena at the Institute for Advanced Study in Princeton, has confirmed that the idea is on the right track. He showed that the physics inside a hypothetical universe with five dimensions and shaped like a Pringle is the same as the physics taking place on the four-dimensional boundary.

    According to Hogan, the holographic principle radically changes our picture of space-time. Theoretical physicists have long believed that quantum effects will cause space-time to convulse wildly on the tiniest scales. At this magnification, the fabric of space-time becomes grainy and is ultimately made of tiny units rather like pixels, but a hundred billion billion times smaller than a proton. This distance is known as the Planck length, a mere 10^-35 metres. The Planck length is far beyond the reach of any conceivable experiment, so nobody dared dream that the graininess of space-time might be discernible.

    That is, not until Hogan realised that the holographic principle changes everything. If space-time is a grainy hologram, then you can think of the universe as a sphere whose outer surface is papered in Planck length-sized squares, each containing one bit of information. The holographic principle says that the amount of information papering the outside must match the number of bits contained inside the volume of the universe.

    Since the volume of the spherical universe is much bigger than its outer surface, how could this be true? Hogan realised that in order to have the same number of bits inside the universe as on the boundary, the world inside must be made up of grains bigger than the Planck length. “Or, to put it another way, a holographic universe is blurry,” says Hogan.

    This is good news for anyone trying to probe the smallest unit of space-time. “Contrary to all expectations, it brings its microscopic quantum structure within reach of current experiments,” says Hogan. So while the Planck length is too small for experiments to detect, the holographic “projection” of that graininess could be much, much larger, at around 10^-16 metres. “If you lived inside a hologram, you could tell by measuring the blurring,” he says.
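
    One quick way to see how a 10^-35-metre graininess could be promoted to something around 10^-16 metres is to take the geometric mean of the Planck length and the size of the apparatus. That scaling is only a heuristic reading of Hogan's argument, and the numbers below are rounded, but it lands on the order of magnitude he quotes:

    import math

    l_planck = 1.6e-35   # metres, the Planck length
    arm = 600.0          # metres, the length of a GEO600 arm (quoted in the article)

    # Assumed scaling: holographic blurring grows roughly as the geometric mean
    # of the Planck length and the distance the light travels in the instrument.
    print(math.sqrt(l_planck * arm))   # ~1e-16 metres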

    When Hogan first realised this, he wondered if any experiment might be able to detect the holographic blurriness of space-time. That’s where GEO600 comes in.

    Gravitational wave detectors like GEO600 are essentially fantastically sensitive rulers. The idea is that if a gravitational wave passes through GEO600, it will alternately stretch space in one direction and squeeze it in another. To measure this, the GEO600 team fires a single laser through a half-silvered mirror called a beam splitter. This divides the light into two beams, which pass down the instrument’s 600-metre perpendicular arms and bounce back again. The returning light beams merge together at the beam splitter and create an interference pattern of light and dark regions where the light waves either cancel out or reinforce each other. Any shift in the position of those regions tells you that the relative lengths of the arms has changed.

    “The key thing is that such experiments are sensitive to changes in the length of the rulers that are far smaller than the diameter of a proton,” says Hogan.

    So would they be able to detect a holographic projection of grainy space-time? Of the five gravitational wave detectors around the world, Hogan realised that the Anglo-German GEO600 experiment ought to be the most sensitive to what he had in mind. He predicted that if the experiment’s beam splitter is buffeted by the quantum convulsions of space-time, this will show up in its measurements (Physical Review D, vol 77, p 104031). “This random jitter would cause noise in the laser light signal,” says Hogan.

    In June he sent his prediction to the GEO600 team. “Incredibly, I discovered that the experiment was picking up unexpected noise,” says Hogan. GEO600’s principal investigator Karsten Danzmann of the Max Planck Institute for Gravitational Physics in Potsdam, Germany, and also the University of Hanover, admits that the excess noise, with frequencies of between 300 and 1500 hertz, had been bothering the team for a long time. He replied to Hogan and sent him a plot of the noise. “It looked exactly the same as my prediction,” says Hogan. “It was as if the beam splitter had an extra sideways jitter.”

    No one – including Hogan – is yet claiming that GEO600 has found evidence that we live in a holographic universe. It is far too soon to say. “There could still be a mundane source of the noise,” Hogan admits.

    Gravitational-wave detectors are extremely sensitive, so those who operate them have to work harder than most to rule out noise. They have to take into account passing clouds, distant traffic, seismological rumbles and many, many other sources that could mask a real signal. “The daily business of improving the sensitivity of these experiments always throws up some excess noise,” says Danzmann. “We work to identify its cause, get rid of it and tackle the next source of excess noise.” At present there are no clear candidate sources for the noise GEO600 is experiencing. “In this respect I would consider the present situation unpleasant, but not really worrying.”

    For a while, the GEO600 team thought the noise Hogan was interested in was caused by fluctuations in temperature across the beam splitter. However, the team worked out that this could account for only one-third of the noise at most.

    Danzmann says several planned upgrades should improve the sensitivity of GEO600 and eliminate some possible experimental sources of excess noise. “If the noise remains where it is now after these measures, then we have to think again,” he says.

    If GEO600 really has discovered holographic noise from quantum convulsions of space-time, then it presents a double-edged sword for gravitational wave researchers. On one hand, the noise will handicap their attempts to detect gravitational waves. On the other, it could represent an even more fundamental discovery.

    Such a situation would not be unprecedented in physics. Giant detectors built to look for a hypothetical form of radioactivity in which protons decay never found such a thing. Instead, they discovered that neutrinos can change from one type into another – arguably more important because it could tell us how the universe came to be filled with matter and not antimatter (New Scientist, 12 April 2008, p 26).

    It would be ironic if an instrument built to detect something as vast as astrophysical sources of gravitational waves inadvertently detected the minuscule graininess of space-time. “Speaking as a fundamental physicist, I see discovering holographic noise as far more interesting,” says Hogan.

    Small price to pay

    Despite the fact that, if Hogan is right, holographic noise will spoil GEO600’s ability to detect gravitational waves, Danzmann is upbeat. “Even if it limits GEO600’s sensitivity in some frequency range, it would be a price we would be happy to pay in return for the first detection of the graininess of space-time,” he says. “You bet we would be pleased. It would be one of the most remarkable discoveries in a long time.”

    However, Danzmann is cautious about Hogan’s proposal and believes more theoretical work needs to be done. “It’s intriguing,” he says. “But it’s not really a theory yet, more just an idea.” Like many others, Danzmann agrees it is too early to make any definitive claims. “Let’s wait and see,” he says. “We think it’s at least a year too early to get excited.”

    The longer the puzzle remains, however, the stronger the motivation becomes to build a dedicated instrument to probe holographic noise. John Cramer of the University of Washington in Seattle agrees. It was a “lucky accident” that Hogan’s predictions could be connected to the GEO600 experiment, he says. “It seems clear that much better experimental investigations could be mounted if they were focused specifically on the measurement and characterisation of holographic noise and related phenomena.”

    One possibility, according to Hogan, would be to use a device called an atom interferometer. These operate using the same principle as laser-based detectors but use beams made of ultracold atoms rather than laser light. Because atoms can behave as waves with a much smaller wavelength than light, atom interferometers are significantly smaller and therefore cheaper to build than their gravitational-wave-detector counterparts.

    So what would it mean if holographic noise has been found? Cramer likens it to the discovery of unexpected noise by an antenna at Bell Labs in New Jersey in 1964. That noise turned out to be the cosmic microwave background, the afterglow of the big bang fireball. “Not only did it earn Arno Penzias and Robert Wilson a Nobel prize, but it confirmed the big bang and opened up a whole field of cosmology,” says Cramer.

    Hogan is more specific. “Forget Quantum of Solace, we would have directly observed the quantum of time,” says Hogan. “It’s the smallest possible interval of time – the Planck length divided by the speed of light.”
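
    For concreteness, the interval Hogan describes can be evaluated exactly as he states it, with rounded standard constants (an illustration, not a figure from the article):

    l_planck = 1.6e-35   # metres, the Planck length
    c = 3.0e8            # metres per second, speed of light
    print(l_planck / c)  # ~5e-44 seconds, the Planck time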

    More importantly, confirming the holographic principle would be a big help to researchers trying to unite quantum mechanics and Einstein’s theory of gravity. Today the most popular approach to quantum gravity is string theory, which researchers hope could describe happenings in the universe at the most fundamental level. But it is not the only show in town. “Holographic space-time is used in certain approaches to quantising gravity that have a strong connection to string theory,” says Cramer. “Consequently, some quantum gravity theories might be falsified and others reinforced.”

    Hogan agrees that if the holographic principle is confirmed, it rules out all approaches to quantum gravity that do not incorporate the holographic principle. Conversely, it would be a boost for those that do – including some derived from string theory and something called matrix theory. “Ultimately, we may have our first indication of how space-time emerges out of quantum theory.” As serendipitous discoveries go, it’s hard to get more ground-breaking than that.

    Marcus Chown is the author of Quantum Theory Cannot Hurt You (Faber, 2008)

    09/02/2010 Posted by | science | | Leave a comment

    Relative Histories Formulation of Quantum Mechanics

    David Strayhorn   ( http://webspace.webring.com/people/xy/yapquack/RelativeHistories.html )
    Saint Louis, MO

    A novel formulation of quantum mechanics, the “relative histories” formulation (RHF), is proposed based upon the assumption that the physical state of a closed system – e.g., the universe – can be represented mathematically by an ensemble E of four-dimensional manifolds W. In this scheme, any real, physical object – be it macroscopic or microscopic – can be assigned to the role of the quantum mechanical “observer.” The state of the observer is represented as a 3-dimensional manifold, O. By using O as a boundary condition, we obtain E as the unique set of all W that satisfy the boundary condition defined by O. The evolution of the “wavefunction of the universe” E is therefore determined by the movement of the observer O through state space.

    In a sense, the RHF is a specific instance of the consistent histories formulation, in which a single “history” is equated with a single W. However, the RHF can also be interpreted as an implementation of Einstein’s ensemble (or statistical) interpretation, which is based upon the notion that the wavefunction is to be understood as the description not of a single system, but of an ensemble of systems: in our case, an ensemble of W’s. We could, in addition, think of the RHF in terms of Everett’s relative state formulation (i.e., the multiple worlds interpretation (MWI)), in which each “world” is equated with one of the four-manifolds W.

    The mathematical underpinnings of the RHF have been developed to an extent sufficient to demonstrate that the RHF gives rise to quantum statistics. However, the mathematical structure of the RHF is far from complete. At this stage, the primary motivation for the development of the RHF centers on the fact that it offers a constellation of interpretational advantages that are not found in any other single formulation of QM. One of these advantages is its versatility; as discussed above, the RHF is a sort of unification of many other standard formulations of QM, such as the consistent histories formulation, the MWI, Einstein’s statistical interpretation, and the Feynman path integral (FPI). In addition, the RHF is possessed of the following interpretational features: (1) a clear definition of the observer and of its space of states, and a lack of the fundamental split between observer and system that is characteristic of the Copenhagen Interpretation; (2) movement of the observer through state space that obeys classical notions of locality and probability; (3) a derivation of quantum statistics and quantum nonlocality from classical notions of probability and locality; (4) a demonstration of compatibility with general relativity; (5) an adherence to the notion that “all is geometry;” (6) the absence of a requirement for any extra “unphysical” dimensions of spacetime beyond the four of our everyday experience; and (7) a spacetime structure that is causal on the large scale. In addition, the RHF provides a simple conceptualization of the EPR argument that QM cannot be considered both local and complete. In exchange for these interpretational features, we are forced merely to accept that the structure of spacetime is acausal on the small scale.

    As of January 2005, the RHF is summarized in a series of three papers that are available for download from the following site:
    RHF papers.

    These papers have been formatted using LaTeX to make them as reader-friendly as possible. There are three parts. Part I – Basics.pdf presents an introduction to the basic mathematical elements of the RHF, including the 4-geon model of fundamental particles, and demonstrates in explicit terms that the RHF makes the same predictions as the Feynman path integral (FPI) approach. Since the FPI is an independent formulation of quantum mechanics — for example, the FPI is well-known to provide a basis for the derivation of the Schrodinger equation — the equivalence between the RHF and the FPI implies equivalence between the RHF and quantum mechanics in general. Part II – Interpretation.pdf provides an overview of the interpretational issues surrounding the RHF, as summarized above. Part III – Further Mathematical Development.pdf is not (as of January 2005) yet ready for download and review, as I anticipate that it will undergo significant revision as I learn more about Morse theory and its application to the RHF.

    I am currently (as of January 2005) in the process of soliciting an informal “peer-review” of the RHF prior to any attempt at submission for publication, even to arXiv. I anticipate this to be a slow process: the complexity of the RHF technique is about on a par with, say, the Feynman path integral (FPI) technique itself. Furthermore, an understanding of the RHF, especially its interpretational implications, requires a broad knowledge of the foundations of QM — especially regarding the inner workings of the FPI — that even many practicing physicists lack. I welcome any comments, be they from an expert or a layman, which should be sent to my yahoo! email address (straycat_md).

    For an online discussion of the Relative Histories Formulation, check out my Yahoo! group: QM_from_GR (http://groups.yahoo.com/group/QM_from_GR)

    03/01/2010 Posted by | science | , , | Leave a comment

    Clearest sign yet of dark matter detected

    New Scientist Revue, 18 December 2009    by Anil Ananthaswamy

    http://www.newscientist.com/article/dn18303-clearest-sign-yet-of-dark-matter-detected.html

    Deep inside an abandoned iron mine in northern Minnesota, physicists may have spotted the clearest signal yet of dark matter, the mysterious stuff that is thought to make up 90 per cent of the mass of the universe.

    The Cryogenic Dark Matter Search (CDMS) collaboration has announced that its experiment has seen tantalising glimpses of what could be dark matter.

    The CDMS-II experiment operates nearly three-quarters of a kilometre underground in the Soudan mine. It is looking for so-called weakly interacting massive particles (WIMPs), which are thought to make up dark matter.

    The experiment consists of five stacks of detectors. Each stack contains six ultra-pure crystals of germanium or silicon at a temperature of 40 millikelvin, a touch above absolute zero. These are designed to detect dark matter particles by looking at the energy released when a particle smashes into a nucleus of germanium or silicon.

    The problem is that many other particles – including cosmic rays and those emitted by the radioactivity of surrounding rock – can create signals in the detector that look like dark matter. So the experiment has been carefully designed to shield the crystals from such background “noise”. The idea is that when the detector works for a long time without seeing any background particles, then if it does see something, it’s most likely to be a dark matter particle.

    Signal or noise?

    When the CDMS-II team looked at the analysis of their latest run – after accounting for all possible background particles and any faulty detectors in their stacks – they were in for a surprise. Their statistical models predicted that they would see 0.8 events during a run between 2007 and 2008, but instead they saw two.

    The team is not claiming discovery of dark matter, because the result is not statistically significant. There is a 1-in-4 chance that it is merely due to fluctuations in the background noise. Had the experiment seen five events above the expected background, the claim for having detected dark matter would have been a lot stronger.
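
    A simple Poisson estimate with the numbers quoted in the article lands in the same ballpark as that 1-in-4 figure; the collaboration's own number also folds in detector systematics, so treat this only as a sanity check:

    import math

    expected_background = 0.8   # events predicted from background (from the article)
    observed = 2                # events actually seen

    # Probability of seeing at least `observed` events from background fluctuations alone
    p = 1 - sum(math.exp(-expected_background) * expected_background**k / math.factorial(k)
                for k in range(observed))
    print(round(p, 2))   # ~0.19, i.e. roughly 1 chance in 5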

    Nonetheless, the team cannot dismiss the possibility that the two events are because of dark matter. The two events have characteristics consistent with those expected from WIMPs (PDF).

    The CDMS-II team is planning to refine the analysis of their data in the next few months. In addition, they have begun building new detectors in the mine, which will be three times as sensitive as the existing setup. These “SuperCDMS” detectors are expected to be in place by the middle of next year.

    Signs from space

    Despite the reservations, there is a palpable sense that an incontrovertible detection of dark matter is imminent. Space-based telescopes like PAMELA have seen particles that could be coming from the annihilation of dark matter in our galaxy. Similar sightings have been made by a balloon-based experiment called ATIC. Soon, the Large Hadron Collider will be starting to smash protons together in the hopes of creating dark matter.

    Dan Tovey at the University of Sheffield, UK, who works on the LHC’s ATLAS detector, says that while the CDMS results are not statistically significant, they are bound to generate excitement at the LHC. “I’m sure that people will be looking at [these results] with a lot of interest,” he says.

    He points out that even if direct detection experiments like CDMS find evidence of dark matter, the LHC will have to create them in order for us to understand the underlying physics. For instance, the theory of supersymmetry predicts a kind of dark matter that will be the target of searches at the LHC.

    “The really exciting aspect of all this is that if you see a signal in a direct-detection dark matter experiment and a signal for supersymmetry at the LHC, you can compare those two observations and investigate whether they are compatible with each other,” says Tovey.

    21/12/2009 Posted by | science | Leave a comment

    In SUSY we trust: What the LHC is really looking for

        http://www.newscientist.com/article/mg20427341.200-in-susy-we-trust-what-the-lhc-is-really-looking-for.html

    AS DAMP squibs go, it was quite a spectacular one. Amid great pomp and ceremony – not to mention dark offstage rumblings that the end of the world was nigh – the Large Hadron Collider (LHC), the world’s mightiest particle smasher, fired up in September last year. Nine days later a short circuit and a catastrophic leak of liquid helium ignominiously shut the machine down.

    Now for take two. Any day now, if all goes to plan, proton beams will start racing all the way round the ring deep beneath CERN, the LHC’s home on the outskirts of Geneva, Switzerland.

    Nobel laureate Steven Weinberg is worried. It’s not that he thinks the LHC will create a black hole that will engulf the planet, or even that the restart will end in a technical debacle like last year’s. No: he’s actually worried that the LHC will find what some call the “God particle”, the popular and embarrassingly grandiose moniker for the hitherto undetected Higgs boson.

    “I’m terrified,” he says. “Discovering just the Higgs would really be a crisis.”

    Why so? Evidence for the Higgs would be the capstone of an edifice that particle physicists have been building for half a century – the phenomenally successful theory known simply as the standard model. It describes all known particles, as well as three of the four forces that act on them: electromagnetism and the weak and strong nuclear forces.

    It is also manifestly incomplete. We know from what the theory doesn’t explain that it must be just part of something much bigger. So if the LHC finds the Higgs and nothing but the Higgs, the standard model will be sewn up. But then particle physics will be at a dead end, with no clues where to turn next.

    Hence Weinberg’s fears. However, if the theorists are right, before it ever finds the Higgs, the LHC will see the first outline of something far bigger: the grand, overarching theory known as supersymmetry. SUSY, as it is endearingly called, is a daring theory that doubles the number of particles needed to explain the world. And it could be just what particle physicists need to set them on the path to fresh enlightenment.

    So what’s so wrong with the standard model? First off, there are some obvious sins of omission. It has nothing whatsoever to say about the fourth fundamental force of nature, gravity, and it is also silent on the nature of dark matter. Dark matter is no trivial matter: if our interpretation of certain astronomical observations is correct, the stuff outweighs conventional matter in the cosmos by more than 4 to 1.

    Ironically enough, though, the real trouble begins with the Higgs. The Higgs came about to solve a truly massive problem: the fact that the basic building blocks of ordinary matter (things such as electrons and quarks, collectively known as fermions) and the particles that carry forces (collectively called bosons) all have a property we call mass. Theories could see no rhyme or reason in particles’ masses and could not predict them; they had to be measured in experiments and added into the theory by hand.

    These “free parameters” were embarrassing loose threads in the theories that were being woven together to form what eventually became the standard model. In 1964, Peter Higgs of the University of Edinburgh, UK, and François Englert and Robert Brout of the Free University of Brussels (ULB) in Belgium independently hit upon a way to tie them up.

    That mechanism was an unseen quantum field that suffuses the entire cosmos. Later dubbed the Higgs field, it imparts mass to all particles. The mass an elementary particle such as an electron or quark acquires depends on the strength of its interactions with the Higgs field, whose “quanta” are Higgs bosons.

    Fields like this are key to the standard model as they describe how the electromagnetic and the weak and strong nuclear forces act on particles through the exchange of various bosons – the W and Z particles, gluons and photons. But the Higgs theory, though elegant, comes with a nasty sting in its tail: what is the mass of the Higgs itself? It should consist of a core mass plus contributions from its interactions with all the other elementary particles. When you tot up those contributions, the Higgs mass balloons out of control.

    The experimental clues we already have suggest that the Higgs’s mass should lie somewhere between 114 and 180 gigaelectronvolts – between 120 and 190 times the mass of a proton or neutron, and easily the sort of energy the LHC can reach. Theory, however, comes up with values 17 or 18 orders of magnitude greater – a catastrophic discrepancy dubbed “the hierarchy problem”. The only way to get rid of it in the standard model is to fine-tune certain parameters with an accuracy of 1 part in 10^34, something that physicists find unnatural and abhorrent.

    Three into one

    The hierarchy problem is not the only defect in the standard model. There is also the problem of how to reunite all the forces. In today’s universe, the three forces dealt with by the standard model have very different strengths and ranges. At a subatomic level, the strong force is the strongest, the weak the weakest and the electromagnetic force somewhere in between.

    Towards the end of the 1960s, though, Weinberg, then at Harvard University, showed with Abdus Salam and Sheldon Glashow that this hadn’t always been the case. At the kind of high energies prevalent in the early universe, the weak and electromagnetic forces have one and the same strength; in fact they unify into one force. The expectation was that if you extrapolated back far enough towards the big bang, the strong force would also succumb, and be unified with the electromagnetic and weak force in one single super-force (see graph).

    In 1974 Weinberg and his colleagues Helen Quinn and Howard Georgi showed that the standard model could indeed make that happen – but only approximately. Hailed initially as a great success, this not-so-exact reunification soon began to bug physicists working on “grand unified theories” of nature’s interactions.

    It was around this time that supersymmetry made its appearance, debuting in the work of Soviet physicists Yuri Golfand and Evgeny Likhtman that never quite made it to the west. It was left to Julius Wess of Karlsruhe University in Germany and Bruno Zumino of the University of California, Berkeley, to bring its radical prescriptions to wider attention a few years later.

    Wess and Zumino were trying to apply physicists’ favourite simplifying principle, symmetry, to the zoo of subatomic particles. Their aim was to show that the division of the particle domain into fermions and bosons is the result of a lost symmetry that existed in the early universe.

    According to supersymmetry, each fermion is paired with a more massive supersymmetric boson, and each boson with a fermionic super-sibling. For example, the electron has the selectron (a boson) as its supersymmetric partner, while the photon is partnered with the photino (a fermion). In essence, the particles we know now are merely the runts of a litter double the size (see diagram).

    The key to the theory is that in the high-energy soup of the early universe, particles and their super-partners were indistinguishable. Each pair co-existed as single massless entities. As the universe expanded and cooled, though, this supersymmetry broke down. Partners and super-partners went their separate ways, becoming individual particles with a distinctive mass all their own.

    Supersymmetry was a bold idea, but one with seemingly little to commend it other than its appeal to the symmetry fetishists. Until, that is, you apply it to the hierarchy problem. It turned out that supersymmetry could tame all the pesky contributions from the Higgs’s interactions with elementary particles, the ones that cause its mass to run out of control. They are simply cancelled out by contributions from their supersymmetric partners. “Supersymmetry makes the cancellation very natural,” says Nathan Seiberg of the Institute for Advanced Study in Princeton.

    That wasn’t all. In 1981 Georgi, together with Savas Dimopoulos of Stanford University, redid the force reunification calculations that he had done with Weinberg and Quinn, but with supersymmetry added to the mix. They found that the curves representing the strengths of all three forces could be made to come together with stunning accuracy in the early universe. “If you have two curves, it’s not surprising that they intersect somewhere,” says Weinberg. “But if you have three curves that intersect at the same point, then that’s not trivial.”
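
    What "come together" means can be sketched with the standard one-loop running of the inverse couplings. The beta coefficients and the rough values at the Z mass below are textbook inputs, not numbers from the article, and thresholds and higher-order corrections are ignored, so this is an illustration rather than the calculation Georgi and Dimopoulos actually did:

    import math

    M_Z = 91.19                                  # GeV
    alpha_inv_MZ = {1: 59.0, 2: 29.6, 3: 8.5}    # approximate inverse couplings at M_Z
                                                 # (GUT-normalised U(1): alpha_1 = 5/3 alpha_Y)

    # One-loop coefficients b_i, with d(1/alpha_i)/d(ln mu) = -b_i / (2*pi)
    b_standard_model = {1: 41/10, 2: -19/6, 3: -7}
    b_with_susy      = {1: 33/5,  2: 1,     3: -3}

    def alpha_inv(i, mu, b):
        # Inverse coupling of group i at scale mu (GeV), run up from M_Z at one loop.
        return alpha_inv_MZ[i] - b[i] / (2 * math.pi) * math.log(mu / M_Z)

    for mu in (1e13, 1e16):
        plain = [round(alpha_inv(i, mu, b_standard_model), 1) for i in (1, 2, 3)]
        susy  = [round(alpha_inv(i, mu, b_with_susy), 1) for i in (1, 2, 3)]
        print(f"mu = {mu:.0e} GeV   without SUSY: {plain}   with SUSY: {susy}")
    # With the superpartners included, the three values nearly coincide near 2e16 GeV;
    # without them they never meet at a single point.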

    This second strike for supersymmetry was enough to convert many physicists into true believers. But it was when they began studying some of the questions raised by the new theory that things became really interesting.

    One pressing question concerned the present-day whereabouts of supersymmetric particles. Electrons, photons and the like are all around us, but of selectrons and photinos there is no sign, either in nature or in any high-energy accelerator experiments so far. If such particles exist, they must be extremely massive indeed, requiring huge amounts of energy to fabricate.

    Such huge particles would long since have decayed into a residue of the lightest, stable supersymmetric particles, dubbed neutralinos. Still massive, the neutralino has no electric charge and interacts with normal matter extremely timorously by means of the weak nuclear force. No surprise, then, that it has eluded detection so far.

    When physicists calculated exactly how much of the neutralino residue there should be, they were taken aback. It was a huge amount – far more than all the normal matter in the universe.

    Beginning to sound familiar? Yes, indeed: it seemed that neutralinos fulfilled all the requirements for the dark matter that astronomical observations persuade us must dominate the cosmos. A third strike for supersymmetry.

    Each of the three questions that supersymmetry purports to solve – the hierarchy problem, the reunification problem and the dark-matter problem – might have its own unique answer. But physicists are always inclined to favour an all-purpose theory if they can find one. “It’s really reassuring that there is one idea that solves these three logically independent things,” says Seiberg.

    Supersymmetry solves problems with the standard model, helps to unify nature’s forces and explains the origin of dark matter

    Supersymmetry’s scope does not end there. As Seiberg and his Princeton colleague Edward Witten have shown, the theory can also explain why quarks are never seen on their own, but are always corralled together by the strong force into larger particles such as protons and neutrons. In the standard model, there is no mathematical indication why that should be; with supersymmetry, it drops out of the equations naturally. Similarly, mathematics derived from supersymmetry can tell you how many ways you can fold a four-dimensional surface, an otherwise intractable problem in topology.

    All this seems to point to some fundamental truth locked up within the theory. “When something has applications beyond those that you designed it for, then you say, ‘well this looks deep’,” says Seiberg. “The beauty of supersymmetry is really overwhelming.”

    Sadly, neither mathematical beauty nor promise is enough on its own. You also need experimental evidence. “It is embarrassing,” says Michael Dine of the University of California, Santa Cruz. “It is a lot of paper expended on something that is holding on by these threads.”

    Circumstantial evidence for supersymmetry might be found in various experiments designed to find and characterise dark matter in cosmic rays passing through Earth. These include the Cryogenic Dark Matter Search experiment inside the Soudan Mine in northern Minnesota and the Xenon experiment beneath the Gran Sasso mountain in central Italy. Space probes like NASA’s Fermi satellite are also scouring the Milky Way for the telltale signs expected to be produced when two neutralinos meet and annihilate.

    The best proof would come, however, if we could produce neutralinos directly through collisions in an accelerator. The trouble is that we are not entirely sure how muscular that accelerator would need to be. The mass of the super-partners depends on precisely when supersymmetry broke apart as the universe cooled and the standard particles and their super-partners parted company. Various versions of the theory have not come up with a consistent timing. Some variants even suggest that certain super-partners are light enough to have already turned up in accelerators such as the Large Electron-Positron collider – the LHC’s predecessor at CERN – or the Tevatron collider in Batavia, Illinois. Yet neither accelerator found anything.

    The reason physicists are so excited about the LHC, though, is that the kind of supersymmetry that best solves the hierarchy problem will become visible at the higher energies the LHC will explore. Similarly, if neutralinos have the right mass to make up dark matter, they should be produced in great numbers at the LHC.

    Since the accident during the accelerator’s commissioning last year, CERN has adopted a softly-softly approach to the LHC’s restart. For the first year it will smash together two beams of protons with a total energy of 7 teraelectronvolts (TeV), half its design energy. Even that is quite a step up from the 1.96 TeV that the Tevatron, the previous record holder, could manage. “If the heaviest supersymmetric particles weigh less than a teraelectronvolt, then they could be produced quite copiously in the early stages of LHC’s running,” says CERN theorist John Ellis.

    If that is so, events after the accelerator is fired up again could take a paradoxical turn. The protons that the LHC smashes together are composite particles made up of quarks and gluons, and produce extremely messy debris. It could take rather a long time to dig the Higgs out of the rubble, says Ellis.

    Any supersymmetric particles, on the other hand, will decay in as little as 10^-16 seconds into a slew of secondary particles, culminating in a cascade of neutralinos. Because neutralinos barely interact with other particles, they will evade the LHC’s detectors. Paradoxically, this may make them relatively easy to find as the energy and momentum they carry will appear to be missing. “This, in principle, is something quite distinctive,” says Ellis.

    So if evidence for supersymmetry does exist in the form most theorists expect, it could be discovered well before the Higgs particle, whose problems SUSY purports to solve. Any sighting of something that looks like a neutralino would be very big news indeed. At the very least it would be the best sighting yet of a dark-matter particle. Even better, it would tell us that nature is fundamentally supersymmetric.

    There is a palpable sense of excitement about what the LHC might find in the coming years. “I’ll be delighted if it is supersymmetry,” says Seiberg. “But I’ll also be delighted if it is something else. We need more clues from nature. The LHC will give us these clues.”

    Blood brothers?

    String theory and supersymmetry are two as-yet unproved theories about the make-up of the universe. But they are not necessarily related.

    It is true that most popular variants of string theory take a supersymmetric universe as their starting point. String theorists, who have taken considerable flak for advocating a theory that has consistently struggled to make testable predictions, will breathe a huge sigh of relief if supersymmetry is found.

    That might be premature: the universe could still be supersymmetric without string theory being correct. Conversely, at the kind of energies probed by the LHC, it is not clear that supersymmetry is a precondition for string theory. “It is easier to understand string theory if there is supersymmetry at the LHC,” says Edward Witten, a theorist at the Institute for Advanced Study in Princeton, “but it is not clear that it is a logical requirement.”

    If supersymmetry does smooth the way for string theory, however, that could be a decisive step towards a theory that solves the greatest unsolved problem of physics: why gravity seems so different to all the rest of the forces in nature. If so, supersymmetry really could have all the answers.

    21/12/2009 Posted by | science | 3 Comments

    Hyperdrive Propulsion could be tested at the Large Hadron Collider

    http://www.technologyreview.com/blog/arxiv/24211/

    Thursday, October 08, 2009

    The principle behind a novel form of spacecraft propulsion could be tested at the world’s most powerful particle accelerator.

    In 1924, the influential German mathematician David Hilbert published a paper called “The Foundations of Physics,” in which he outlined an extraordinary side effect of Einstein’s theory of relativity.

    Hilbert was studying the interaction between a relativistic particle moving toward or away from a stationary mass. His conclusion was that if the relativistic particle had a velocity greater than about half the speed of light, a stationary mass should repel it. At least, that’s how it would appear to a distant inertial observer.

    That’s an interesting result, and one that has been more or less forgotten, says Franklin Felber, an independent physicist based in the United States. (Hilbert’s paper was written in German.)

    Felber has turned this idea on its head, predicting that a relativistic particle should also repel a stationary mass. He says that this effect could be exploited to propel an initially stationary mass to a good fraction of the speed of light.

    The basis for Felber’s “hypervelocity propulsion” drive is that the repulsive effect allows a relativistic particle to deliver a specific impulse that is greater than its specific momentum, thereby achieving speeds greater than the driving particle’s speed. He says this is analogous to the elastic collision of a heavy mass with a much lighter, stationary mass, from which the lighter mass rebounds with about twice the speed of the heavy mass.
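
    The collision analogy is ordinary Newtonian mechanics and can be checked directly; the masses and speed below are arbitrary illustrative numbers:

    def rebound_speed(m_heavy, m_light, v_heavy):
        # Final speed of an initially stationary light mass struck head-on
        # in a one-dimensional elastic collision.
        return 2 * m_heavy * v_heavy / (m_heavy + m_light)

    print(rebound_speed(1000.0, 1.0, 10.0))   # ~19.98 m/s, about twice the incoming 10 m/s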

    What’s more, Felber predicts that this speed can be achieved without generating the severe stresses that could damage a space vehicle or its occupants. That’s because the spacecraft follows a geodetic trajectory, in which the only stresses arise from tidal forces (although it’s not clear why those forces wouldn’t be substantial).

    That’s a neat idea, but little better than science fiction, were it not for one further corollary: Felber is proposing an experiment that could prove his ideas or damn them.

    It turns out that when it is up and running, the Large Hadron Collider (LHC) will accelerate particles to the kind of energies that generate this repulsive force. Felber’s idea is to set up a test mass next to the beam line and measure the forces on it as the particles whiz past.

    The repulsive force that Felber predicts will be tiny, but it could be detected using a resonant test mass. And since the experiment wouldn’t interfere with the LHC’s main business of colliding particles, it could be run in conjunction with it.

    While the huge energy of the LHC makes it first choice for such an experiment, Felber says the effect could also be seen at Fermilab’s Tevatron, albeit with a signal strength that would be three orders of magnitude smaller.

    Perhaps that’s something to consider as a last hurrah for the old Tevatron, before they begin mothballing it sometime next year.

    Ref: arxiv.org/abs/0910.1084: Test of Relativistic Gravity for Propulsion at the Large Hadron Collider

    Comments

    [no subject]

    A colleague of mine asked if I thought this was possible or hokum. The author’s own “paper” (unpublished preprint, linked above) contains rather a lot of self-references to other unpublished preprints, usually a sign of some level of crack-pottedness. Also, his own numbers in the abstract for this idea (an acceleration of 3 nm/s^2 for 2 ns) make this completely unworkable. That corresponds to a displacement of a test mass of 1.5 x 10^-35 m. The most sensitive displacement detectors are the laser gravitational wave observatories, each of which is a pair of perpendicular 10 km Fabry-Perot cavities. These detectors have a sensitivity of about 10^-18 m. That’s seventeen orders of magnitude difference. On an amusing note, that displacement is actually the same order of magnitude as the “Planck length”. I can’t help but wonder whether the author engaged in some silly numerology in order to get it to work out that way.

    sandratycova, 10/09/2009

    • Re:

      Could not email the author, but if the influence on a passing proton beam were measurable, then I believe the trajectory of the beam would also be changed this way, perhaps by more than is allowed for the ongoing experiments.

      emilius, 10/09/2009

    None

    The “repulsive force” he attributes to Hilbert is simply a coordinate artifact of the Schwarzschild metric, related to the gravitational redshift as you approach a gravitating body. There is a good reason Hilbert’s paper was forgotten, and that this guy’s papers haven’t been published.

    fadude, 10/09/2009

    12/10/2009 Posted by | science | Leave a comment

    BOSS: Dark Energy and the Geometry of Space

    The SDSS-III’s Baryon Oscillation Spectroscopic Survey (BOSS) will map the spatial distribution of luminous galaxies and quasars to detect the characteristic scale imprinted by baryon acoustic oscillations in the early universe. Sound waves that propagate in the early universe, like spreading ripples in a pond, imprint a characteristic scale on cosmic microwave background fluctuations. These fluctuations have evolved into today’s walls and voids of galaxies, meaning this baryon acoustic oscillation scale is visible among galaxies today.

    [Figure: a map of luminous red galaxies as seen by the SDSS; the large red circle shows the characteristic scale of baryon acoustic oscillations.]

    Using the acoustic scale as a physically calibrated ruler, BOSS will determine the angular diameter distance with a precision of 1% at redshifts z = 0.3 and z = 0.6 and 1.5% at z = 2.5, and it will measure the cosmic expansion rate H(z) with 1-2% precision at the same redshifts. These measurements will provide demanding tests for theories of dark energy and the origin of cosmic acceleration.
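
    To see how a calibrated ruler turns into distances, here is a minimal sketch in a flat Lambda-CDM cosmology. The parameter values (H0 = 70 km/s/Mpc, Omega_m = 0.3, a sound horizon of roughly 150 Mpc) are generic assumptions rather than BOSS's own numbers, so the output is illustrative only:

    import math

    H0, omega_m, omega_l = 70.0, 0.3, 0.7   # assumed cosmological parameters
    c = 299792.458                          # km/s
    r_s = 150.0                             # Mpc, approximate comoving sound horizon (the ruler)

    def hubble(z):
        return H0 * math.sqrt(omega_m * (1 + z)**3 + omega_l)

    def comoving_distance(z, steps=10000):
        # Comoving distance to redshift z in Mpc (flat universe, trapezoid rule).
        dz = z / steps
        return c * sum(0.5 * (1 / hubble(k * dz) + 1 / hubble((k + 1) * dz)) * dz
                       for k in range(steps))

    z = 0.6
    angle = math.degrees(r_s / comoving_distance(z))
    print(f"apparent BAO scale at z = {z}: ~{angle:.1f} degrees")

    Measuring that apparent angle across many galaxies at a known redshift pins down the distance to them; the radial extent of the same feature in redshift space constrains the expansion rate H(z).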

    For a detailed description of BOSS, see Section 3 of the Project Description, available as a PDF document.

    [Figure: the correlation function of luminous galaxies as a function of comoving separation; the BAO scale appears as a bump at a comoving separation of about 100 h^-1 Mpc (inset: close-up of the bump).]

    BOSS at a glance
    • Dark time observations
    • Fall 2009 – Spring 2014
    • 1,000-fiber spectrograph, resolution R~2000
    • wavelengths 360-1000 nm
    • 10,000 square degrees
    • Redshifts of 1.5 million luminous galaxies to z = 0.7
    • Lyman-α forest spectra of 160,000 quasars at redshifts 2.2 < z < 3

    http://www.sdss3.org/cosmology.php

    06/10/2009 Posted by | science | Leave a comment

    20 Things You Didn’t Know About… Time

    http://discovermagazine.com/2009/mar/20-things-you-didn.t-know-about-time

    The beginning, the end, and the funny habits of our favorite ticking force.                      by LeeAundra Temescu

    From the March 2009 issue, published online March 12, 2009

    1 “Time is an illusion. Lunchtime doubly so,” joked Douglas Adams in The Hitchhiker’s Guide to the Galaxy. Scientists aren’t laughing, though. Some speculative new physics theories suggest that time emerges from a more fundamental—and timeless—reality.

    2 Try explaining that when you get to work late. The average U.S. city commuter loses 38 hours a year to traffic delays.

    3 Wonder why you have to set your clock ahead in March? Daylight Saving Time began as a joke by Benjamin Franklin, who proposed waking people earlier on bright summer mornings so they might work more during the day and thus save candles. It was introduced in the U.K. in 1916 and then spread around the world.

    4 Green days. The Department of Energy estimates that electricity demand drops by 0.5 percent during Daylight Saving Time, saving the equivalent of nearly 3 million barrels of oil.

    5 By observing how quickly bank tellers made change, pedestrians walked, and postal clerks spoke, psychologists determined that the three fastest-paced U.S. cities are Boston, Buffalo, and New York.

    6 The three slowest? Shreveport, Sacramento, and L.A.

    7 One second used to be defined as 1/86,400 of the length of a day. However, Earth’s rotation isn’t perfectly reliable. Tidal friction from the sun and moon slows our planet and increases the length of a day by 3 milliseconds per century.

    8 This means that in the time of the dinosaurs, the day was just 23 hours long.

    9 Weather also changes the day. During El Niño events, strong winds can slow Earth’s rotation by a fraction of a millisecond every 24 hours.

    10 Modern technology can do better. In 1972 a network of atomic clocks in more than 50 countries was made the final authority on time, so accurate that it takes 31.7 million years to lose about one second.

    11 To keep this time in sync with Earth’s slowing rotation, a “leap second” must be added every few years, most recently this past New Year’s Eve.

    12 The world’s most accurate clock, at the National Institute of Standards and Technology in Colorado, measures vibrations of a single atom of mercury. In a billion years it will not lose one second.

    13 Until the 1800s, every village lived in its own little time zone, with clocks synchronized to the local solar noon.

    14 This caused havoc with the advent of trains and timetables. For a while watches were made that could tell both local time and “railway time.”

    15 On November 18, 1883, American railway companies forced the national adoption of standardized time zones.

    16 Thinking about how railway time required clocks in different places to be synchronized may have inspired Einstein to develop his theory of relativity, which unifies space and time.

    17 Einstein showed that gravity makes time run more slowly. Thus airplane passengers, flying where Earth’s pull is weaker, age a few extra nanoseconds each flight (a rough estimate is sketched after this list).

    18 According to quantum theory, the shortest moment of time that can exist is known as the Planck time, about 5.4 x 10^-44 second.

    19 Time has not been around forever. Most scientists believe it was created along with the rest of the universe in the Big Bang, 13.7 billion years ago.

    20 There may be an end of time. Three Spanish scientists posit that the observed acceleration of the expanding cosmos is an illusion caused by the slowing of time. According to their math, time may eventually stop, at which point everything will come to a standstill.
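
    The gravitational part of item 17 can be estimated with the approximation that a clock raised by a height h runs fast by a fraction of about g*h/c^2. The altitude and flight time below are assumed, and the slowing due to the aircraft's speed (which works the other way) is ignored, so this is only a rough sketch:

    g = 9.8              # m/s^2
    c = 3.0e8            # m/s
    altitude = 1.0e4     # m, assumed cruising altitude
    duration = 2 * 3600  # s, an assumed two-hour flight

    extra = (g * altitude / c**2) * duration
    print(round(extra * 1e9, 1))   # ~7.8 extra nanoseconds of ageing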

    17/09/2009 Posted by | science | , , | 2 Comments

    The Biocentric Universe Theory: Life Creates Time, Space, and the Cosmos Itself

    Stem-cell guru Robert Lanza presents a radical new view of the universe and everything in it.
    by Robert Lanza and Bob Berman

    http://discovermagazine.com/2009/may/01-the-biocentric-universe-life-creates-time-space-cosmos

    From the May 2009 issue, published online May 1, 2009

    Adapted from Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, by Robert Lanza with Bob Berman, published by BenBella Books in May 2009.

    The farther we peer into space, the more we realize that the nature of the universe cannot be understood fully by inspecting spiral galaxies or watching distant supernovas. It lies deeper. It involves our very selves.

    This insight snapped into focus one day while one of us (Lanza) was walking through the woods. Looking up, he saw a huge golden orb web spider tethered to the overhead boughs. There the creature sat on a single thread, reaching out across its web to detect the vibrations of a trapped insect struggling to escape. The spider surveyed its universe, but everything beyond that gossamer pinwheel was incomprehensible. The human observer seemed as far-off to the spider as telescopic objects seem to us. Yet there was something kindred: We humans, too, lie at the heart of a great web of space and time whose threads are connected according to laws that dwell in our minds.

    Is the web possible without the spider? Are space and time physical objects that would continue to exist even if living creatures were removed from the scene?

    Figuring out the nature of the real world has obsessed scientists and philosophers for millennia. Three hundred years ago, the Irish empiricist George Berkeley contributed a particularly prescient observation: The only things we can perceive are our perceptions. In other words, consciousness is the matrix upon which the cosmos is apprehended. Color, sound, temperature, and the like exist only as perceptions in our head, not as absolute essences. In the broadest sense, we cannot be sure of an outside universe at all.

    For centuries, scientists regarded Berkeley’s argument as a philosophical sideshow and continued to build physical models based on the assumption of a separate universe “out there” into which we have each individually arrived. These models presume the existence of one essential reality that prevails with us or without us. Yet since the 1920s, quantum physics experiments have routinely shown the opposite: Results do depend on whether anyone is observing. This is perhaps most vividly illustrated by the famous two-slit experiment. When someone watches a subatomic particle or a bit of light pass through the slits, the particle behaves like a bullet, passing through one hole or the other. But if no one observes the particle, it exhibits the behavior of a wave that can inhabit all possibilities—including somehow passing through both holes at the same time.

    Some of the greatest physicists have described these results as so confounding they are impossible to comprehend fully, beyond the reach of metaphor, visualization, and language itself. But there is another interpretation that makes them sensible. Instead of assuming a reality that predates life and even creates it, we propose a biocentric picture of reality. From this point of view, life—particularly consciousness—creates the universe, and the universe could not exist without us.

    MESSING WITH THE LIGHT
    Quantum mechanics is the physicist’s most accurate model for describing the world of the atom. But it also makes some of the most persuasive arguments that conscious perception is integral to the workings of the universe. Quantum theory tells us that an unobserved small object (for instance, an electron or a photon—a particle of light) exists only in a blurry, unpredictable state, with no well-defined location or motion until the moment it is observed. This is Werner Heisenberg’s famous uncertainty principle. Physicists describe the phantom, not-yet-manifest condition as a wave function, a mathematical expression used to find the probability that a particle will appear in any given place. When a property of an electron suddenly switches from possibility to reality, some physicists say its wave function has collapsed.

    What accomplishes this collapse? Messing with it. Hitting it with a bit of light in order to take its picture. Just looking at it does the job. Experiments suggest that mere knowledge in the experimenter’s mind is sufficient to collapse a wave function and convert possibility to reality. When particles are created as a pair—for instance, two electrons in a single atom that move or spin together—physicists call them entangled. Due to their intimate connection, entangled particles share a wave function. When we measure one particle and thus collapse its wave function, the other particle’s wave function instantaneously collapses too. If one photon is observed to have a vertical polarization (its waves all moving in one plane), the act of observation causes the other to instantly go from being an indefinite probability wave to an actual photon with the opposite, horizontal polarity—even if the two photons have since moved far from each other.
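
    As a toy illustration of those perfectly anti-correlated polarisations (a cartoon of the statistics only, not a model of any real experiment), one can sample many pairs and check that each individual outcome looks random while the partners always disagree:

    import random

    def measure_pair():
        # Each entangled pair yields one vertically and one horizontally polarised
        # photon; which photon shows which outcome is undetermined until measurement.
        first = random.choice(["V", "H"])
        second = "H" if first == "V" else "V"
        return first, second

    pairs = [measure_pair() for _ in range(10000)]
    print(all(a != b for a, b in pairs))                    # True: partners always disagree
    print(sum(a == "V" for a, _ in pairs) / len(pairs))     # ~0.5: a single outcome looks random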

    In 1997 University of Geneva physicist Nicolas Gisin sent two entangled photons zooming along optical fibers until they were seven miles apart. One photon then hit a two-way mirror where it had a choice: either bounce off or go through. Detectors recorded what it randomly did. But whatever action it took, its entangled twin always performed the complementary action. The communication between the two happened at least 10,000 times faster than the speed of light. It seems that quantum news travels instantaneously, limited by no external constraints—not even the speed of light. Since then, other researchers have duplicated and refined Gisin’s work. Today no one questions the immediate nature of this connectedness between bits of light or matter, or even entire clusters of atoms.

    Before these experiments most physicists believed in an objective, independent universe. They still clung to the assumption that physical states exist in some absolute sense before they are measured.

    All of this is now gone for keeps.

    WRESTLING WITH GOLDILOCKS
    The strangeness of quantum reality is far from the only argument against the old model of reality. There is also the matter of the fine-tuning of the cosmos. Many fundamental traits, forces, and physical constants—like the charge of the electron or the strength of gravity—make it appear as if everything about the physical state of the universe were tailor-made for life. Some researchers call this revelation the Goldilocks principle, because the cosmos is not “too this” or “too that” but rather “just right” for life.

    At the moment there are only four explanations for this mystery. The first two give us little to work with from a scientific perspective. One is simply to argue for incredible coincidence. Another is to say, “God did it,” which explains nothing even if it is true.

    The third explanation invokes a concept called the anthropic principle, first articulated by Cambridge astrophysicist Brandon Carter in 1973. This principle holds that we must find the right conditions for life in our universe, because if such life did not exist, we would not be here to find those conditions. Some cosmologists have tried to wed the anthropic principle with the recent theories that suggest our universe is just one of a vast multitude of universes, each with its own physical laws. Through sheer numbers, then, it would not be surprising that one of these universes would have the right qualities for life. But so far there is no direct evidence whatsoever for other universes.

    The final option is biocentrism, which holds that the universe is created by life and not the other way around. This is an explanation for and extension of the participatory anthropic principle described by the physicist John Wheeler, a disciple of Einstein’s who coined the terms wormhole and black hole.

    SEEKING SPACE AND TIME
    Even the most fundamental elements of physical reality, space and time, strongly support a biocentric basis for the cosmos.

    According to biocentrism, time does not exist independently of the life that notices it. The reality of time has long been questioned by an odd alliance of philosophers and physicists. The former argue that the past exists only as ideas in the mind, which themselves are neuroelectrical events occurring strictly in the present moment. Physicists, for their part, note that all of their working models, from Isaac Newton’s laws through quantum mechanics, do not actually describe the nature of time. The real point is that no actual entity of time is needed, nor does it play a role in any of their equations. When they speak of time, they inevitably describe it in terms of change. But change is not the same thing as time.

    To measure anything’s position precisely, at any given instant, is to lock in on one static frame of its motion, as in the frame of a film. Conversely, as soon as you observe a movement, you cannot isolate a frame, because motion is the summation of many frames. Sharpness in one parameter induces blurriness in the other. Imagine that you are watching a film of an archery tournament. An archer shoots and the arrow flies. The camera follows the arrow’s trajectory from the archer’s bow toward the target. Suddenly the projector stops on a single frame of a stilled arrow. You stare at the image of an arrow in midflight. The pause in the film enables you to know the position of the arrow with great accuracy, but you have lost all information about its momentum. In that frame it is going nowhere; its path and velocity are no longer known. Such fuzziness brings us back to Heisenberg’s uncertainty principle, which describes how measuring the location of a subatomic particle inherently blurs its momentum and vice versa.

    All of this makes perfect sense from a biocentric perspective. Everything we perceive is actively and repeatedly being reconstructed inside our heads in an organized whirl of information. Time in this sense can be defined as the summation of spatial states occurring inside the mind. So what is real? If the next mental image is different from the last, then it is different, period. We can award that change with the word time, but that does not mean there is an actual invisible matrix in which changes occur. That is just our own way of making sense of things. We watch our loved ones age and die and assume that an external entity called time is responsible for the crime.

    There is a peculiar intangibility to space, as well. We cannot pick it up and bring it to the laboratory. Like time, space is neither physical nor fundamentally real in our view. Rather, it is a mode of interpretation and understanding. It is part of an animal’s mental software that molds sensations into multidimensional objects.

    Most of us still think like Newton, regarding space as sort of a vast container that has no walls. But our notion of space is false. Shall we count the ways? 1. Distances between objects mutate depending on conditions like gravity and velocity, as described by Einstein’s relativity, so that there is no absolute distance between anything and anything else. 2. Empty space, as described by quantum mechanics, is in fact not empty but full of potential particles and fields. 3. Quantum theory even casts doubt on the notion that distant objects are truly separated, since entangled particles can act in unison even if separated by the width of a galaxy.

    UNLOCKING THE CAGE
    In daily life, space and time are harmless illusions. A problem arises only because, by treating these as fundamental and independent things, science picks a completely wrong starting point for investigations into the nature of reality. Most researchers still believe they can build from one side of nature, the physical, without the other side, the living. By inclination and training these scientists are obsessed with mathematical descriptions of the world. If only, after leaving work, they would look out with equal seriousness over a pond and watch the schools of minnows rise to the surface. The fish, the ducks, and the cormorants, paddling out beyond the pads and the cattails, are all part of the greater answer.

    Recent quantum studies help illustrate what a new biocentric science would look like. Just months ago, Nicolas Gisin announced a new twist on his entanglement experiment; in this case, he thinks the results could be visible to the naked eye. At the University of Vienna, Anton Zeilinger’s work with huge molecules called buckyballs pushes quantum reality closer to the macroscopic world. In an exciting extension of this work—proposed by Roger Penrose, the renowned Oxford physicist—not just light but a small mirror that reflects it becomes part of an entangled quantum system, one that is billions of times larger than a buckyball. If the proposed experiment ends up confirming Penrose’s idea, it would also confirm that quantum effects apply to human-scale objects.

    Biocentrism should unlock the cages in which Western science has unwittingly confined itself. Allowing the observer into the equation should open new approaches to understanding cognition, from unraveling the nature of consciousness to developing thinking machines that experience the world the same way we do. Biocentrism should also provide stronger bases for solving problems associated with quantum physics and the Big Bang. Accepting space and time as forms of animal sense perception (that is, as biological), rather than as external physical objects, offers a new way of understanding everything from the microworld (for instance, the reason for strange results in the two-slit experiment) to the forces, constants, and laws that shape the universe. At a minimum, it should help halt such dead-end efforts as string theory.

    Above all, biocentrism offers a more promising way to bring together all of physics, as scientists have been trying to do since Einstein’s unsuccessful unified field theories of eight decades ago. Until we recognize the essential role of biology, our attempts to truly unify the universe will remain a train to nowhere.

    Adapted from Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, by Robert Lanza with Bob Berman, published by BenBella Books in May 2009.

    17/09/2009 Posted by | science | | 2 Comments

    Quantum Mind

    “Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain.”

    David Chalmers

    Within the classical neurophysiology scheme we can study neurons and the electrical transfer of sensory or motor impulses. But we cannot describe the origin of the perception of sensory impulses, or the act of decision making that gives rise to motor responses. The higher brain functions cannot be studied at the cellular or even the molecular level. It seems that conscious awareness arises not from the classical level of brain function but from another paradigm. That is why classical neuroanatomy and neurophysiology, which dwell at the cellular and molecular levels, cannot offer an acceptable explanation for the realm of consciousness and face a dead end in this regard.

    Many aspects of our conscious awareness are also obscure and remain mysteries. On the other hand, quantum mechanics is not comprehensible within our classical knowledge and experience either. There have been claims that these two domains have many similarities. Attempts are being made to explain mind within a quantum mechanical context and vice versa. The quantum arena may be very strange and incomprehensible, but we have access to the deeper levels of our own consciousness. We have, or should be able to obtain, better knowledge of this domain 75. Only a bat knows what it is like to be a bat. Studying quantum physics within the framework of mind can be very productive. If we can comprehend the principles of quantum mechanics using the tenets of consciousness, then we do not need to call the quantum domain unknowable.

    Below I will summarize some of the similarities between the quantum domain and the realm of mind.

    Uncertainty Principle(1)

    Werner Heisenberg showed that one cannot precisely measure both the location and the momentum of a particle at the same time. The more we pinpoint the location, the more uncertain we are about its momentum. This was the first step of the departure from Newtonian, classical physics into the quantum paradigm. David Bohm suggests a similar kind of uncertainty in the thought process. He writes,

    If a person tries to observe what he is thinking … he introduces unpredictable and uncontrollable changes in the way his thoughts proceed thereafter … If we compare the instantaneous state of a thought with the position of a particle, and the general direction of change of that thought with the particle’s momentum, we have a strong analogy. 76

    We can use another analogy for the uncertainty principle in the mind domain. If we focus on external events (like watching an engaging show), our train of thought almost stalls while we concentrate on watching. On the contrary, when we daydream our focus on external observations is minimal while we dwell deep in our thoughts. There is therefore a complementarity relation between observation and the train of thought (external versus internal focus). A similar complementary relationship exists between focusing on internal sensations and on the external data gathered by our five senses. The presence of opposite emotions such as love and hate is another example of an uncertainty-like relationship in the domain of consciousness. We find many other complementarity relationships within our conscious awareness.

    Quantum entanglement: spooky action at a distance

    Quantum entanglement is one of the main principles of quantum physics. A pair of particles traveling back to back in opposite directions remain entangled even if they are worlds apart. If we reduce the superposition of one of them to a single state, the other one, which may be miles away, is immediately reduced in accordance with it. So far no physical connection has been found between the two particles. Particle entanglement is instant and leaves no trace.
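    As a hedged illustration (my own sketch, not from the original text), the code below samples measurement outcomes for a pair of polarization-entangled photons at analyzer angles a and b, using the textbook prediction that the two outcomes agree with probability sin²(a−b). It reproduces the quantum statistics, not a physical mechanism for the connection.

    ```python
    import math
    import random

    def entangled_pair(a: float, b: float) -> tuple:
        """Sample one measurement of a polarization-entangled photon pair.

        a, b: analyzer angles (radians) on the two sides.
        Returns (+1/-1, +1/-1); outcomes agree with probability sin^2(a - b),
        which gives the quantum correlation E(a, b) = -cos(2 (a - b)).
        """
        first = random.choice([+1, -1])              # each side alone looks random
        agree = random.random() < math.sin(a - b) ** 2
        return first, first if agree else -first

    def correlation(a: float, b: float, trials: int = 100_000) -> float:
        """Estimate E(a, b) = <A * B> from repeated sampled pairs."""
        return sum(x * y for x, y in (entangled_pair(a, b) for _ in range(trials))) / trials

    if __name__ == "__main__":
        for deg in (0, 22.5, 45, 90):
            ang = math.radians(deg)
            print(f"angle difference {deg:5.1f} deg  E ~ {correlation(0.0, ang):+.3f}"
                  f"  (quantum prediction {-math.cos(2 * ang):+.3f})")
    ```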

    Similarly, in our consciousness domain we can connect to remote locations instantly. We do not need to travel along any road to reach them, and we do not have to follow any trail. There is no passage of time either, just a sudden envisioning of remote places.
    We can envision the day before yesterday instantly as well; we do not have to pass through yesterday to reach it. Likewise, we can imagine the events of the day after tomorrow without passing through tomorrow. Entanglement in our conscious domain is instant and traceless as well.

    Besides, experiments have shown that there are instant correlations between the activities of neurons even when they are located far from each other (Braitenberg 1965; Riccardi 1967). 75

    Hypnosis is a transpersonal experience in which two people are connected through their psyches. It is often thought that only the hypnotist influences the subject, yet experiments have been reported in which feeding sweets to the subject increased the salivation of the hypnotist, so the connection is two-way, with no apparent physical link. Telepathy is also a commonly reported experience. Many distinguished psychologists subscribe to transpersonal psychology. It seems that entanglement without an apparent physical link is a feature of the psyche as well.

    Bose-Einstein Condensate

    Hot atoms are very mobile and move apart from each other. As they cool down they move more slowly and get closer together. If we can cool them close to absolute zero they fall into the same state and form a macroscopic lump that can even be visible under a microscope. This state is called the ground state. In the ground state the atoms lose their individual identity and the lump exhibits a single coherent quantum state. The lump demonstrates oneness.
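    For reference (a standard result for an ideal Bose gas, not part of the original essay), the temperature below which the atoms begin to pile up in the single ground state is

    \[
    T_{c} \;=\; \frac{2\pi\hbar^{2}}{m\,k_{B}}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
    \]

    where \(n\) is the number density of atoms, \(m\) their mass, \(k_{B}\) Boltzmann's constant and \(\zeta(3/2)\approx 2.612\). For the dilute alkali gases used in the experiments this works out to temperatures on the order of 100 nK.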

    Bose-Einstein condensate: condensed atoms at the bottom of a magnetic bowl demonstrate oneness.
    Source: Jack Dodd Centre, University of Otago (www.physics.otago.ac.nz)

    A sensation of oneness beyond one’s ego is a common experience. Often a group of people who are somehow connected (family members, fellow countrymen, a sports team’s fans, religious groups, etc.) experience this oneness in a highly emotional state as well. When we see or hear of people suffering in another part of the world we are disturbed and feel their pain. The feeling of oneness extends to non-human living things, and to nonliving things, just the same.
    On the other hand, it seems that long-term memories condense and are stored in a sort of ground state as well. Apparently, condensed memories are in a non-localized, correlated and homogeneous state before being recalled. However, simultaneous recall of memories is not possible; rather, memories come into consciousness one by one. These behaviors resemble the quantum state of a Bose-Einstein condensate and support the conviction that a similar system is at work in the preservation of long-term memory.

    Wholeness

    At subatomic scales, many characteristics of the elements exist in indivisible, coherent quantum states (like the simultaneous wave-particle nature of matter). Similarly, it is assumed that all elementary particles are made of a single underlying entity; different vibration frequencies of this entity create the different quarks. Electrons cannot be distinguished from each other either, so they cannot be analyzed as distinct particles. Quantum field theory also advocates a kind of wholeness and indistinguishability of the elements of its fields. So we may conclude that at the quantum scale we come close to a melting of identity into one whole entity.

    Similarly, if we think about any element of our thoughts deeply enough we reach a point of its indivisibility from, and indistinguishability with, the other elements of our awareness. At a deeper level our awareness demonstrates a coherent state as well. We may call this coherent state the self.

    Teleportation

    The process of sending bits of information about a particle or atom to a spatially remote place via quantum entanglement, in order to rebuild the original there, is called quantum teleportation. Interestingly, the original copy is destroyed in the process.
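    The paragraph appears to describe the standard teleportation protocol. As a reference (textbook algebra, not taken from this text), for a qubit \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\) and a shared entangled pair \(|\Phi^{+}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\):

    \[
    |\psi\rangle_{1}\,|\Phi^{+}\rangle_{23}
    \;=\; \tfrac{1}{2}\Bigl[\,
    |\Phi^{+}\rangle_{12}\,|\psi\rangle_{3}
    + |\Phi^{-}\rangle_{12}\,Z|\psi\rangle_{3}
    + |\Psi^{+}\rangle_{12}\,X|\psi\rangle_{3}
    + |\Psi^{-}\rangle_{12}\,XZ|\psi\rangle_{3}
    \Bigr].
    \]

    A Bell-basis measurement on qubits 1 and 2 yields two classical bits; sending them lets the receiver undo the corresponding Pauli operator and recover \(|\psi\rangle\) on qubit 3, while the measurement destroys the original state, which is exactly the "original copy is destroyed" feature mentioned above.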

    Similarly, the image of an object in our brain is cut into bits and pieces of information and stored as memory. Upon recall of the object, the bits of information are gathered again to recreate the image in our mind. The process of recalling associates the image with present elements, so that the original memorized image is changed forever. These two processes are very similar.

    Quantum tunneling

    When a wave-particle faces a barrier it can disappear and reappear on the other side of the barrier. Quantum tunneling is another example of the similarities between quantum mechanics and the workings of consciousness.
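    For a rectangular barrier of height \(V_{0}\) and width \(L\), the standard estimate of the tunneling probability (added here for reference) is

    \[
    T \;\approx\; e^{-2\kappa L},
    \qquad
    \kappa \;=\; \frac{\sqrt{2m\,(V_{0}-E)}}{\hbar},
    \]

    so the chance of finding the particle on the far side falls off exponentially with the barrier width and with the energy deficit \(V_{0}-E\).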

    If our train of thought is interrupted when it faces a barrier such as an external stimulus (anything that attracts our attention), it can, at our will, reappear and continue after the interruption has ceased.

    Parallel Processing
    The brain can work along many paths simultaneously. The amount of information present, and of data processing, at each instant is simply amazing. The brain has a massive parallel-processing ability. Efforts are being made to build new computers with the same ability; in such computers the elements are placed simultaneously in a superposition of many states. It is more logical and economical to assume that the brain uses the same mechanism (superposition) rather than to attribute its parallel processing to many multi-path neuronal connections and activities.
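    A small sketch (illustrative only, not a model of the brain) of why superposition gives this kind of parallelism: the state vector of n two-level elements holds 2^n amplitudes, and a single operation acts on all of them at once.

    ```python
    import numpy as np

    def uniform_superposition(n_qubits: int) -> np.ndarray:
        """Return the equal superposition of all 2**n basis states."""
        dim = 2 ** n_qubits
        return np.full(dim, 1 / np.sqrt(dim))

    def apply_phase_to_all(state: np.ndarray, phase: float) -> np.ndarray:
        """One 'global' operation touches every amplitude simultaneously."""
        return np.exp(1j * phase) * state

    if __name__ == "__main__":
        for n in (1, 10, 20, 30):
            print(f"{n:2d} two-level elements -> state vector with {2**n:,} amplitudes")
        psi = uniform_superposition(10)            # 1,024 amplitudes
        psi = apply_phase_to_all(psi, np.pi / 4)   # acts on all 1,024 at once
        print("norm still 1:", np.isclose(np.vdot(psi, psi).real, 1.0))
    ```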

    Time symmetry: Feynman’s second diagram
    In the classical limit, time has only a one-way trajectory that extends from the past to the future. At the quantum level, however, time is symmetric, and travel from future to past is also a feature. In the second Feynman diagram of Compton scattering (the photon-electron collision process), the particles scatter from each other even before they collide.

    Feynman diagrams

    In the Schrödinger’s cat analogy, when we open the box and collapse the superposition (finding a dead or a live cat), a history is instantly created. This is another example of backward movement in time.

    Within our train of thought we are not limited to the classical one-way trajectory of time either. In our imagination we can travel to the past or the future at will. Time is symmetric within our consciousness.

    Wave-particle duality

    Objects in quantum mechanics are described as having a dual character: they may appear as wave or as particle. In the wave aspect, the different possible values of a property such as position are described by a state vector. This simply means that the particle is widely spread out and its position is best described as a wave with different amplitudes at different locations in space. This cannot be appreciated in classical terms, where an object is logically positioned in one location. The particle nature of the object, on the other hand, is classically comprehensible: in classical mechanics the object is tangible and sensibly located at a certain point in space.
    Likewise, the contents of our conscious awareness are widely spread and to a certain degree chaotic. If I spoke the whole contents of my mind you would doubt my sanity. When we make a statement about any topic, we logically organize our thoughts and express them in a sensible way. Out of a vast realm of unorganized and chaotic data we extract just the items that seem logical and express them. This is similar to wave collapse, where a classical, logical level rises out of the non-deterministic quantum-level wave function.

    Superposition of states

    Contrary to the classical level of reality, in the quantum domain particles are not in a definite state. Rather, they exist in a superposition of every possible state. Just imagine how chaotic the world is at the quantum level. The sum of all probable states of all the particles involved is called a coherent state.

    As mentioned above, deep in our thought domain we face every possible and imaginable state as well. Nothing is impossible in our dreams. Our awareness also includes data from the unconscious, plus data inherited from our ancestors back to the first protozoa, and much more. It is claimed that our brain processes some 400,000,000,000 bits of information every second while we are aware of only about 2,000 bits of it. In fact there is chaos deep in our consciousness as well. There we find the domain of all potentialities and possibilities.

    This view of consciousness opens a door to finding intelligible interpretations of quantum mechanical paradoxes. Quantum physics is not comprehensible within the classical physical principles we deal with in the macrocosm; it is, however, explicable within our realm of awareness. We have first-hand and better knowledge of this realm. If we pass the logical/classical limits of classical physics and logical thought, we enter quantum physics and the quantum mind. Then quantum processes are no longer unknowable.

    Above, I have pointed to passage through the gate of the logical/classical level. What does that actually mean? Below, I will elaborate on the process.

    State Reduction


    A solid and objective physical reality has been the basis of classical science for centuries. We see the outside world as a solid and dependable entity, and the materialistic approach to science has only solidified this kind of belief. However, special relativity asserts that the fundamentals of our objective world, namely space, time and matter, are not rigid. They are malleable and change according to each observer’s frame of reference. Time for somebody living at the equator passes more slowly than for a person at higher altitude. The mass of a fast-moving object grows larger in comparison with the same object at rest in our frame of reference.

    In conclusion, objective reality is not solid; rather, it is different for different individuals. Furthermore, quantum mechanics draws a much more bizarre picture of reality. According to quantum physics, objects actually exist as potentialities, not in definite forms. In other words, they are in a superposition of different states; they exist in all possible states concurrently. Somehow, in the eye of the observer, they turn into one solid state. We call this one state objective reality.

    Double slit (source: www.blacklightpower.com/theory/DoubleSlit.shtml)

    In Tonomura’s double-slit experiment, electrons are sent one by one toward a barrier with two slits, and an interference pattern appears. This suggests that electrons are waves. However, if we place a detector next to either slit to identify which slit each electron actually passes through, the electron acts like a particle and passes through only one slit. As a result we see just two bands, indicating that the electrons, as particles, passed through the two slits and hit the screen in two corresponding bands. The experiment has been performed with bigger objects, up to sodium atoms, with the same result. Check Double slit experiment for an animation of the experiment.

    Here we conclude that the electrons are in a superposition of two states (wave and particle) while traveling along the path between the electron source and the screen.
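    A minimal numerical sketch of the contrast being described (illustrative assumptions only: monochromatic waves, far-field screen, equal slit amplitudes; not the Tonomura setup): with both paths open and no which-path detection the amplitudes add and fringes appear; with which-path information the probabilities add and the fringes vanish.

    ```python
    import numpy as np

    # Illustrative parameters in units of the wavelength (not taken from the Tonomura paper)
    wavelength = 1.0
    slit_separation = 20.0      # distance between slit centres
    slit_width = 4.0            # width of each slit
    theta = np.linspace(-0.2, 0.2, 2001)   # observation angle on a far screen (radians)

    def slit_amplitude(theta, offset):
        """Fraunhofer amplitude of one slit whose centre is displaced by `offset`."""
        envelope = np.sinc(slit_width * np.sin(theta) / wavelength)
        phase = np.exp(2j * np.pi * offset * np.sin(theta) / wavelength)
        return envelope * phase

    a1 = slit_amplitude(theta, +slit_separation / 2)
    a2 = slit_amplitude(theta, -slit_separation / 2)

    coherent = np.abs(a1 + a2) ** 2                  # no which-path info: amplitudes add, fringes appear
    which_path = np.abs(a1) ** 2 + np.abs(a2) ** 2   # slit detector: probabilities add, fringes vanish

    def count_peaks(intensity):
        """Interior local maxima, a rough stand-in for the number of visible fringes."""
        inner = intensity[1:-1]
        return int(np.sum((inner > intensity[:-2]) & (inner > intensity[2:])))

    print("fringes without which-path detection:", count_peaks(coherent))
    print("fringes with which-path detection:   ", count_peaks(which_path))
    ```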


    However, the act of detection reduces the double character of the electron to just one observable character (particle). Therefore, we conclude that what we have assumed for centuries to be a solid objective reality is in fact a fuzzy state of different probabilities superimposed on top of each other. Reality as we see it is an artificial singular state chosen from multiple, coherent states. Somehow just one of all the possible states is singled out and projected into our consciousness. This is what we call objective reality. As you can see, this is a very shaky reality indeed.

    One interpretation suggests that we get an answer appropriate to our question. In the double-slit experiment, when via our detection we ask which slit the electron, as a particle, went through, we see the answer suited to that question. The two bands on the screen reveal the particle state of electrons randomly passing through each slit; the sum of the results of many passages of electrons as particles creates the two bands. If we do not ask this question, the screen shows the interference pattern.
    This interpretation closely parallels problem solving in the realm of consciousness. When we attend to a question or problem, an appropriate search in the mind’s domain reveals an answer suited to the question at hand and not to any other question. If we look for an answer to another question, the answer to that specific question comes to our attention.

    The other interpretation suggests that the electron is in both particle and wave states simultaneously. By the act of detection we reduce the simultaneous states to just one state (particle). How are we to interpret this state reduction? How does the double character of the electron turn into just one character (the particle state) when we look for it? How and where does this so-called state reduction happen? We can categorize the possible answers as follows.

    External Reduction

    These are the answers that attribute the reduction to elements outside our consciousness. They refer to state-reduction possibilities occurring before the signals are received by us.

    a/ The first possibility is that the electron is alive, notices our presence, and plays tricks on us: as soon as it notices our detector, it reduces itself to just a particle. Then we have to define life and ask whether an elementary particle can have such a sophisticated mind. Biology dictates that a sophisticated mind requires a complex neurological system. Even if we assume an elementary particle has a kind of consciousness, it cannot demonstrate the functionality of a sophisticated mind.

    b/ Maybe the detector itself affects the wave-particle duality of the electron and reduces it to just a particle. However, this runs against Schrödinger’s argument: the detector itself has to be in superposition and has to demonstrate all the possible states it can have. The multiple character of the detector, coupled with the dual character of the electron, only adds to the confusion; it cannot reduce the electron to one state.

    c/ Many believe that encounters with other particles and rays in the vicinity dissolve the superposition of states at the quantum level and change the superposition into the single objective state observed in the macro-world. This is how it is supposed to work: objects are not isolated systems; they sit in an environment and are in constant interaction with other particles and photons. For example, cosmic rays can interact with the particles in an object and reduce their states to one of the possible states. According to this school, this is why we do not see objects in a chaotic superposition.

    Decoherence

    However, those other photons and particles are again in superposition states themselves; they cannot bring us out of the chaos.

    Internal Reduction

    Alternatively, we may assume that the information received by the experimenter/observer contains all the probable states (the whole information of the superposition), but that somewhere in our consciousness domain it is reduced to just one logical state.

    Our sensory organs are in fact lenses and act just like the lens of a camera. Consider the eye: its lens takes light waves and turns them into a spatial image projected onto the retina. From there the image is transferred to our brain through nerve impulses (action potentials). Our ear does the same thing: it turns sound waves into nerve impulses and sends them to the brain, where they are interpreted as different sounds. What our skin actually senses is vibration as well; if we hold a vibrating tuning fork close to our skin, it feels as if the fork were in contact with the skin, although in reality there is no actual contact. The other two senses act as lenses too. Therefore, we may conclude that the outside world reaches us only in spectral form, made of waves, and that our brains receive the impulses and interpret them as a solid (massive) outside world.

    Here again the incoming waves are supposed to be in superposition and to deliver contradictory messages. Thus the collapse of information to one observed state cannot happen at this level, unless we believe that we reduce the outside world’s superposition inside our brain and create an objective world according to our conscious and unconscious will. There are schools of thought that advocate the above; the movie The Matrix is based on this idea. I will elaborate further on this idea in the next chapter, “Quantum Brain”.

    Mixed Reduction

    Individuals acquire different perspectives from the same physical reality.  The Gestalt picture is a representation of this fact.

    Gestalt picture: do you see a beautiful girl or an old lady?

    Obviously, a portion of the state reduction happens inside the brain of each individual; that is where we obtain our individual perspectives. However, there are many common elements between different individuals’ perspectives on the same physical phenomenon, so the main portion of the state reduction has to occur outside any one person’s consciousness.

    To find a solution we can look at the act of logical derivation and cognition. Our consciousness is filled with a jungle of data and memories that often contradict one another. The data embedded in the subconscious and beyond exacerbate the disorder even further.

    We may conclude that during the act of logical thinking our consciousness selects and permits only the suitable data to appear in our awareness. That is how we draw a meaningful conclusion out of a chaotic situation. Our consciousness is responsible for creating an orderly concept or perspective out of countless disarrayed data.

    However, our individual perspectives, though minutely different from one another, basically revolve around a state of reality that is more or less shared by other conscious beings as well. So there must be a reality out there beyond our consciousness. What created this orderly reality? The question is how we arrive from an uncertain and disordered quantum frenzy in the micro-world at a definite and deterministic reality in the macro-world. If we are allowed to use the mechanism of logical thinking as an analogy for external state reduction as well, then we may envisage answers to the state-reduction paradox.

    If consciousness is needed to create a logical conclusion out of the frenzy of information in the mind, maybe we need a similar mechanism to create a logical world out of the quantum frenzy. This leads us to the assumption that there is an awareness out there which tends to put everything in order in the macrocosm. We can call this awareness universal consciousness. This is an alternative answer to the decoherence explanation given above.

    Global Consciousness Project

    The Global Consciousness Project was explained in the consciousness chapter. In this project, which started in 1998, random number generators (RNGs) that randomly generate zeros or ones were used. It is claimed that an individual’s intention can generate more of the desired number and change the randomness of the RNG. This is presented as further evidence that conscious intention can influence physical elements.

    In the next stage of the project, random number generators were placed at 65 host sites around the world to study the effect of the collective attention of people around the globe on upcoming events. These generators are connected to software that reads their output and records a 200-bit trial sum once every second, continuously over months and years. The details of the project can be reviewed by clicking on the link above. The project’s authors claim that the collective attention of people around the world can change the outcome of the random number generators and shift the result toward one number. Below, see the inclination of the number generators during the recent United States election and Senator Obama’s victory.
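    A hedged sketch of the kind of statistic described here (my reading of the published method, not code from the project): each second a 200-bit trial is summed, deviations from the binomial expectation (mean 100, variance 50) are converted to a z-score, and the excess z² − 1 is accumulated; if the bits are truly random this cumulative deviation just wanders around zero.

    ```python
    import random

    BITS_PER_TRIAL = 200                        # one trial per second in the project description
    EXPECTED_MEAN = BITS_PER_TRIAL / 2          # 100
    EXPECTED_VAR = BITS_PER_TRIAL / 4           # 50 for fair bits

    def trial_sum(rng: random.Random) -> int:
        """Sum of one 200-bit trial from a (simulated, fair) hardware RNG."""
        return sum(rng.getrandbits(1) for _ in range(BITS_PER_TRIAL))

    def cumulative_deviation(seconds: int, seed: int = 0) -> list:
        """Running sum of (z^2 - 1) per trial; it stays near zero for truly random bits."""
        rng = random.Random(seed)
        path, total = [], 0.0
        for _ in range(seconds):
            z = (trial_sum(rng) - EXPECTED_MEAN) / EXPECTED_VAR ** 0.5
            total += z * z - 1.0                # chi-square-style excess per trial
            path.append(total)
        return path

    if __name__ == "__main__":
        walk = cumulative_deviation(3600)       # one simulated hour
        print("final cumulative deviation:", round(walk[-1], 2))
        print("typical scale of random wander: ~", round((2 * len(walk)) ** 0.5, 1))
    ```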

    Global Consciousness
    http://noosphere.princeton.edu

    The above diagram indicates how the collective attention and focus of people throughout the world inclines the result of the random number generator to one side. The project, its authors argue, not only demonstrates the effect of consciousness on physical events but also points to the presence of a collective consciousness among humans.
    On the other hand, physical events happen all over the universe, and the universe was physically active long before the human race appeared. We may then assume that there is a universal consciousness which is responsible for state reduction.

    If we accept the presence of a universal consciousness out there, then where is the boundary between individual consciousness and universal consciousness? Does the skin of our body delineate this boundary? The answer is not easy to reach. Evidence such as the double-slit experiment and hypnotism suggests that our field of awareness extends beyond the limits of our body. On the other hand, phenomena like intuition and telepathy suggest that the universal consciousness field can penetrate our own consciousness. Therefore, we may conclude that there is an overlap between the boundaries of the two domains; an undetermined gray area seems to exist between these two fields. We may even go further and assume that our consciousness is a portion of this universal awareness. It seems that mixed state reduction is the more logical and agreeable option.

    Quantum Link

    If consciousness arises from the quantum level, then where do we find the quantum point of effect? Keep in mind that any biochemical reaction is ultimately rooted at the subatomic level, where quantum effects are best demonstrated. There are different hypotheses about where awareness is linked to the quantum domain. Here I will mention a few of the possible sites.

    First, let us look at the anatomy and physiology of the nervous system at the macroscopic level. A sensory impulse travels along peripheral nerves and reaches the brain. Inside the brain, each nerve cell connects to many different neurons via its many branching fingers, called dendrites. The contact points are called synapses. There are on the order of a hundred trillion (10^14) synapses in the human brain.

    Neuron

    The dendrites of different neurons are not actually attached to each other. At higher magnification, there is a gap between the dendrites of different neurons that is about two hundred angstroms (2 × 10^-8 m) wide. These gaps are called synaptic clefts.

    There is debate about the mechanism by which the signal passes across the synaptic cleft. Many believe that the synapses are where all the communication and consciousness mechanisms reside.

    Because the body of the neuron ends at the synapse, the normal mechanism (sodium-pump action) can no longer transfer the signal across the minute gap between the dendrites. In this picture an electron is taken to carry the signal across the cleft to the next neuron. However, the released electron does not have enough energy to cross the gap; its energy is sufficient for a trip of only about seven angstroms.

    Evan Harris Walker 72, an American physicist, believes that the electron has to perform quantum tunneling to reach the next neuron. Quantum tunneling happens when a particle that does not have enough kinetic energy to pass over a barrier nevertheless appears on the other side of it.
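    A rough numerical check of the scale of this problem (a sketch under assumed numbers, not Walker's actual calculation): using the rectangular-barrier estimate T ≈ exp(−2κL) with an assumed barrier of about 1 eV, tunneling across the full ~200 Å cleft is astronomically unlikely, while a ~7 Å stretch is quite feasible, so any tunneling account has to involve much shorter effective distances or lower barriers than the bare numbers suggest.

    ```python
    import math

    HBAR = 1.054_571_8e-34      # J*s
    M_E = 9.109_383_7e-31       # electron mass, kg
    EV = 1.602_176_6e-19        # J per eV

    def tunneling_probability(barrier_ev: float, width_m: float, energy_ev: float = 0.0) -> float:
        """Rectangular-barrier estimate T ~ exp(-2*kappa*L) for an electron."""
        kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
        return math.exp(-2 * kappa * width_m)

    if __name__ == "__main__":
        # Assumed numbers for illustration only: a ~1 eV barrier over 7 A and over 200 A.
        for width_angstrom in (7, 200):
            t = tunneling_probability(barrier_ev=1.0, width_m=width_angstrom * 1e-10)
            print(f"width {width_angstrom:4d} A  ->  T ~ {t:.3e}")
    ```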

    If quantum mechanics enters the picture in brain function, then we have to use quantum physical laws to evaluate brain processes. Maybe the synapses are where state reduction, or, as it is also called, state-vector collapse, happens.

    Synaptic cleft

    Evan Walker, like many other theorists, believes in the presence of a universal consciousness to which we are all somehow connected and with which we interact. He believes that consciousness is not tied directly to any of the usual constructs of the physical world, like space, time, mass or the fundamental forces; rather, it is tied to an informational domain. He suggests that consciousness should be considered as some quantum mechanical process going on in the brain. If consciousness is an informational field, then the synaptic clefts are one of the locations in which to look for it.

    Pores in neuronal membranes may be another location for quantum mechanical effects. In an in vitro experiment, Martin Fleischmann (1980) argued that the passage of ions through the pores of a thin membrane (simulating a cell membrane) has to be described within a quantum electrodynamics scheme: the pores have to be treated as a single quantum field to allow the passage of ions, and a classical description cannot explain it. Therefore the cell membrane can be considered another site where quantum mechanical principles may influence the function of consciousness.

    On the other hand, at greater magnification we can see that living matter is made of a huge and dense network of protein filaments surrounded by water molecules. Mari Jibu and Kunio Yasue, of the Okayama Institute for Quantum Physics, call this protein-filament/water combination the fundamental structure of living matter.
    In 1979 Davydov described a solitary wave propagating along the chain of protein filaments. The wave is called the Davydov soliton, and its energy is kept free from thermalization. Jibu and Yasue call the protein-filament waves the first degree of freedom of the fundamental system of living matter 75. My conviction about such a non-vanishing wave is outlined in the wave-particle chapter: to me, a non-vanishing wave travels to a non-local realm during each oscillation, where it replenishes its energy. This non-local, information-rich realm provides the quantum freedom.

    Water dipole (source: http://www.sciencelearn.org.nz)

    Mari Jibu and Kunio Yasue also believe that the spatial geometric configuration of water molecules provides the second quantum mechanical degree of freedom of living matter. Water is made of one oxygen atom and two hydrogen atoms, and as such a water molecule manifests a non-vanishing electric dipole moment. Water is abundant in the body, so it can deliver quantum effects throughout the bodies of living things.

    If at a deeper level the anatomy of the nervous system is linked to quantum fields, then its product (consciousness) has to be studied within the context of quantum field theory. Consciousness cannot be studied at the cellular or molecular level; we simply do not reach any convincing result at those levels.

    Universal Consciousness

    Many authors believe in the presence of a universal consciousness in which information exists in a coherent state. If our brain is connected to such an informational field, then many transpersonal experiences such as hypnosis or telepathy can find logical explanations. Previously, I have assumed the proposed singularity to be the ultimate source of information. Louis de Broglie, for his part, proposed that all matter exhibits both wave-like and particle-like properties. Furthermore, in previous chapters I have speculated that during wave motion or quantum tunneling, particles join the proposed singularity while being absent from space-time. Therefore we can assume that we, also being waves, are in constant periodic contact with this ultimate source of information. This is one assumed way in which our consciousness connects to a universal informational field.

    Spirituality

    Spirituality is an internal realization; we connect to it within our realm of consciousness. My personal conviction is that the universal consciousness is fundamentally different from the god advocated by traditional religions. The god of the main religions is human-like: a separate entity that lives somewhere in a space-time setting (the heavens). The universal consciousness, on the other hand, is where everything connects and coheres. It encompasses everything and, in a sense, it is everything. Spirituality, then, may mean sensing this universal consciousness and feeling acquaintance with it. It may mean the easing sensation of oneness with the universe. The ultimate joy is the realization of, and connection with, this realm; this is what meditation provides. Obviously, this universal awareness is completely different from the conventional religions’ creator. It does not need our worship, nor does it appoint representatives. It certainly does not need our donations, nor will it order us to kill each other.

    We need a new explanation of spirituality within a scientific framework. While all of us feel an internal acquaintance with spirituality, what traditional faiths advocate does not seem very convincing. It seems that religions have hijacked spirituality and derailed it.
    Unfortunately, science has left spirituality an orphan, because classical science could hardly pass beyond the objective realm. In the absence of a scientific explanation people cling to old definitions and beliefs, and these outdated beliefs constantly create disasters at the family, regional, and global levels. Scientists’ hesitation to touch the subject only leaves the public to succumb to the ignorance promoted by old-fashioned establishments, which means even more chaos and disaster for the human race in the coming years. Quantum mechanics and quantum field theory seem to be the right vehicles for exploring consciousness and spirituality. New insights and theories based on modern science are urgently needed to bring the human race out of the current chaos.

    Conclusion

    David Chalmers calls for the introduction of a fundamental theory of consciousness. It seems that such a theory could originate from the fundamentals of quantum mechanics. The many similarities between quantum mechanics and conscious awareness call for studying consciousness at the quantum level. Likewise, these resemblances suggest that quantum mechanical paradoxes may find solutions within a consciousness-like domain. The classical way of thinking, and solutions that introduce extra dimensions or multiple universes, have not offered a satisfactory answer.

    To me, they are the product of reductionist physicists trying to explain everything using mostly classical physics concepts. Mind you, physical reality is sensed by consciousness: our conscious awareness is our tool for exploring the physical world. We need to examine that tool and obtain a deep understanding of this device.
    In the coming chapter, “Quantum Brain,” I will explore the nature of consciousness even further.

    I am delving into the universal consciousness in the context of the singularity/space-time dual-nature hypothesis. All along I have been trying to show that the encounter with the singularity did not end at the time of the Big Bang. I have been claiming that the singularity is an informational domain, ever present as a fundamental element of the world and of us. More and more paradoxes arise that cannot be explained in the context of objective reality alone. A universal informational domain can provide decent solutions to these paradoxes.

    On the other hand, the twenty-first century calls for new insights and beliefs based on today’s knowledge. Carrying over the faiths and perceptions of our ancestors, with their limited knowledge, can only create catastrophe at the personal, family, regional and global levels. New understandings and theories of spirituality are sorely needed to replace the outdated and destructive faiths of the past.

    Further Reading

    Globus Gordon: BRAIN AND BEING (John Benjamins, 2004)
    Chalmers David: THE CONSCIOUS MIND (Oxford University Press, 1996)
    Culbertson James: THE MINDS OF ROBOTS (University of Illinois Press, 1963)
    Culbertson James: SENSATIONS MEMORIES AND THE FLOW OF TIME (Cromwell Press, 1976)
    Eccles John: EVOLUTION OF THE BRAIN (Routledge, 1989)
    Eccles John: THE SELF AND ITS BRAIN (Springer, 1994)
    Globus Gordon: THE POSTMODERN BRAIN (John Benjamins, 1995)
    Herbert Nick: ELEMENTAL MIND (Dutton, 1993)
    Lockwood Michael: MIND, BRAIN AND THE QUANTUM (Basil Blackwell, 1989)
    Marshall I.N., Zohar Danah: QUANTUM SELF : HUMAN NATURE AND CONSCIOUSNESS DEFINED BY THE NEW PHYSICS
    Penrose Roger: THE EMPEROR’S NEW MIND (Oxford Univ Press, 1989)
    Penrose Roger: SHADOWS OF THE MIND (Oxford University Press, 1994)
    Pribram Karl: LANGUAGES OF THE BRAIN (Prentice Hall, 1971)
    Pribram Karl: BRAIN AND PERCEPTION (Lawrence Erlbaum, 1990)
    Searle John: THE REDISCOVERY OF THE MIND (MIT Press, 1992)
    Stapp Henry: MIND, MATTER AND QUANTUM MECHANICS (Springer-Verlag, 1993)
    Yasue Kunio & Jibu Mari: QUANTUM BRAIN DYNAMICS AND CONSCIOUSNESS (John Benjamins, 1995)

    http://www.scaruffi.com/science/qc.html

    —————————————————————————————————————————————————————————————–

    (1) Underlined words are linked to appropriate sites for further explanation.

    The arguments presented are open for debate. The reader is encouraged to email his or her input to correct, modify or develop the contents. Please send your emails to: zpfields@yahoo.ca

    Stop Global Warming

    http://www.universaltheory.org/ConciousnessReign.html

    18/08/2009 Posted by | science | Leave a comment

    Quantum measurement of a state with minimum uncertainty (squeezing quantum measurement)

    Posted by emulenews on 26 January 2009

    Heisenberg's uncertainty principle states that every quantum measurement (observation) of two complementary properties (such as position and momentum, or velocity) is subject to an error (uncertainty) that is shared between the two properties. Heisenberg's principle gives a lower bound for the product of these uncertainties. In practice this product is far larger, by several orders of magnitude. The uncertainty of one property can be reduced using a technique called quantum "squeezing," but at the cost of increasing the uncertainty of the complementary one. In January of this year a technique was published that avoids this trade-off and allows the uncertainty of one property to be reduced without affecting the complementary one, until the theoretical limit of Heisenberg's principle is almost reached. It is based on manipulating the spin of three photons in an optical fiber in such a way that a composite particle, the "triphoton," is produced, to which the "squeezing" is applied. The applications of this technique to high-precision measurement, photolithography and quantum information processing are remarkable. It is reported by Geoff J. Pryde, "Quantum physics: Squeeze until it hurts," Nature News and Views, 457: 35-36, 1 January 2009, commenting on the work of L. K. Shalm, R. B. A. Adamson, A. M. Steinberg, "Squeezing and over-squeezing of triphotons," Nature 457: 67-70, 1 January 2009.

    Figure: quantum squeezing in the amplitude and phase of a sine wave.
    The existence of a fundamental limit on the precision of any measurement is a purely quantum phenomenon. Consider a beam of photons striking a beam splitter that reflects 50% of the light it receives and transmits the other 50%. In classical mechanics the intensity of the transmitted light is exactly half of the incident intensity, with absolute precision. In quantum mechanics the statistical mean of the number of photons that pass through the beam splitter, when a beam of N photons is incident, is N/2, but in any given experiment more or fewer photons may pass. It is like tossing a coin N times: on average you get N/2 heads and as many tails, but in practice the number of heads fluctuates and is not deterministic in each run of N tosses. There is a non-zero probability that N tosses yield only heads, for example. Returning to the phenomenon of quantum squeezing, the figure illustrates it using the amplitude and phase of a sinusoidal light wave. On the left is a sine wave with a certain uncertainty in amplitude that is constant over its whole period. This uncertainty generates an uncertainty in its phase, in where the wave crosses the horizontal axis. The squeezed light is shown in the figure on the right: the error in phase has been reduced, but at the cost of greatly increasing the error in amplitude. Quantum complementarity in action.
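    A minimal simulation of the coin-toss analogy in the paragraph above (illustrative only, not from the original post): the transmitted photon number is binomial, so its mean is N/2 and its shot-noise fluctuation grows as sqrt(N)/2.

    ```python
    import random
    import statistics

    def transmitted_photons(n_photons: int, rng: random.Random) -> int:
        """Each photon independently passes a 50/50 beam splitter or is reflected."""
        return sum(rng.random() < 0.5 for _ in range(n_photons))

    def shot_noise(n_photons: int, runs: int = 1000, seed: int = 1) -> tuple:
        """Mean and standard deviation of the transmitted count over many runs."""
        rng = random.Random(seed)
        counts = [transmitted_photons(n_photons, rng) for _ in range(runs)]
        return statistics.mean(counts), statistics.pstdev(counts)

    if __name__ == "__main__":
        for n in (100, 10_000):
            mean, std = shot_noise(n)
            print(f"N = {n:6d}: mean ~ {mean:8.1f} (N/2 = {n/2}),"
                  f" std ~ {std:6.1f} (sqrt(N)/2 = {n**0.5/2:.1f})")
    ```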

    Figure: quantum squeezing in polarization for triphotons.
    The work of the Canadian physicists Shalm, Adamson and Steinberg, of the University of Toronto, is based on measuring the polarization of the light of three photons entangled in a state called a triphoton. The polarization of a light beam is a three-component vector which, in the Stokes representation, is given by three parameters S1, S2 and S3, of which only two are independent; they lie on a three-dimensional sphere S1² + S2² + S3² = S0², where S0 is the intensity of the beam. In the quantum version these parameters are replaced by complementary operators that do not commute with one another, like position and momentum, so it is not possible to determine the three parameters simultaneously with absolute precision: if the uncertainty in one of them is reduced, it grows in the other two. The figure shows two projections of the Stokes sphere, where blue marks the least likely values, with a negative Wigner quasiprobability of -0.2, red the most likely, with a positive quasiprobability of +0.7, and white the values with zero quasiprobability. In the experiments the triphotons were prepared with a degree of squeezing T varying between 0 and 1.7, where values greater than 1 are "over-squeezed." For T=0, the uncertainty along the S1 and S2 axes is the same. As T grows, the uncertainty along the S2 axis shrinks (two blue regions appear to the left and right in the figure). For states with T>1, the uncertainty "twists" around the sphere, which it cannot leave, forming three more or less equally spaced blue regions on the sphere (for T=1.7 in the figure). These highly "twisted" states, called "NooN" states, are capable of reaching the limit of the Heisenberg inequalities. The figure below shows the uncertainty in the parameters S1 (green) and S2 (red) for 11 triphoton states with T increasing from 0 to 1.7. As T approaches 1, the uncertainty in S2 decreases, but once that value is reached it starts to grow again. The solid curves show the theoretical values. The Canadian physicists were only able to obtain these states with a fidelity of 0.68 relative to the ideal case. The fidelity can be improved by repeating the over-squeezing process for each of the three polarization axes, with which they achieved a fidelity of 0.80 (the "fat" red and green dots in the figure below). The researchers hope to improve this fidelity in the future.

    Figure: uncertainty versus squeezing in polarization for triphotons.
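    For reference (standard quantum-optics relations, not part of the original post), the quantum Stokes operators inherit angular-momentum-like commutators, and it is these that force the trade-off described above:

    \[
    [\hat S_{1}, \hat S_{2}] \;=\; 2i\,\hat S_{3} \quad (\text{and cyclic permutations}),
    \qquad
    \Delta S_{1}\,\Delta S_{2} \;\ge\; |\langle \hat S_{3}\rangle| .
    \]

    Polarization squeezing means pushing the variance of one Stokes operator below the symmetric value at the expense of its conjugate, exactly the behaviour traced out by the parameter T in the experiment.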

    Posted in Ciencia, Física, Mecánica Cuántica, Physics, Science, Óptica | Tagged: , , , , | 1 comment

    18/08/2009 Posted by | science | Leave a comment

    The Hitachi double-slit experiment, or the most beautiful experiment in all of physics

    Posted by emulenews on 31 January 2009

    In 2002 a poll was held among the readers of the magazine Physics World to vote for the most beautiful experiment in all of physics (The most beautiful experiment). The poll was won by Young's double-slit experiment demonstrating the dual wave-particle nature of the electron (The double-slit experiment). I have seen several realizations of the experiment, but in my opinion the one that illustrates it best is the one carried out by Akira Tonomura and his collaborators at Hitachi in 1989. Here is the YouTube video.

    In a word: spectacular.

    A. Tonomura, J. Endo, T. Matsuda, T. Kawasaki, H. Ezawa, "Demonstration of single-electron buildup of an interference pattern," American Journal of Physics 57: 117-120, 1989 [free copy at ion.elte.hu].

    The experiment's web page at Hitachi.

    Posted in Ciencia, Física, Mecánica Cuántica, Physics, Science | Tagged: , , , , | 2 comments »

    18/08/2009 Posted by | science | Leave a comment

    The pre-quantum physics of Nobel laureate 't Hooft and the Born rule

    Posted by emulenews on 9 February 2009

    "God does not play dice," Albert Einstein. In fact, the phrase "God does not play dice" was never actually written by Einstein. What he really wrote, in a letter to Max Born in 1926 (translated into English), is the following (according to Ralph Keyes, "The Quote Verifier: Who Said What, Where, and When," St. Martin's Press, 2006):

    I, at any rate, am convinced that He is not playing at dice.

    What did Einstein mean? Basically that he believed a classical statistical theory (a hidden-variables theory) could explain quantum mechanics. Bell's inequalities ("Speakable and Unspeakable in Quantum Mechanics") and Jauch-Piron-type theorems ("Hidden Variables Revisited") have convinced most of us that no such hidden-variables theory exists. "Never say never again" (Never Say Never Again). Well, if such a theory does exist it will be extremely "subtle" ("Subtle is the Lord").

    A decade ago the Nobel laureate Gerardus 't Hooft proposed a hidden-variables (pre-quantum) theory in which quantum mechanics appears as an emergent phenomenon; it does not need to be postulated from the start, as explained in Massimo Blasone, Petr Jizba, Fabio Scardigli, "Can quantum mechanics be an emergent phenomenon?," ArXiv preprint, 26 Jan 2009.

    Quantum mechanics and Einstein's theory of gravity are the two most precise physical theories we know (experimentally verified in some experiments to as many as 12 digits of precision). However, everything we know about them is at "low" energy (of the order of 1 TeV). The Planck energy is millions of millions of millions of times larger, practically an energy imaginable only during the Big Bang. Most physicists trust quantum mechanics and believe it will remain applicable at those energy scales, arguing for a quantum gravity (string theory or something similar) compatible with it that yields general relativity at low energy. Only a few think that at those energies quantum mechanics must be replaced by a pre-quantum theory, possibly classical, which may or may not also require modifying gravity. G. 't Hooft, motivated by black-hole thermodynamics, proposed a theory of this kind, in which gravity (a relativistic theory) is not altered, in "Equivalence relations between deterministic and quantum mechanical systems," Journal of Statistical Physics 53: 323-344, 1988, well summarized in "Determinism beneath Quantum Mechanics," ArXiv preprint, 16 Dec 2002.

    Contrary to common belief, it is not difficult to construct deterministic models where stochastic behavior is correctly described by quantum mechanical amplitudes, in precise accordance with the Copenhagen-Bohr-Bohm doctrine. What is difficult however is to obtain a Hamiltonian that is bounded from below, and whose ground state is a vacuum that exhibits complicated vacuum fluctuations, as in the real world. (…) Theories of this kind may be essential for understanding causality at Planckian distance scales.

    't Hooft's pre-quantum theory approximates space-time by a discrete structure, similar to a cellular automaton, which lets it evade most of the restrictions of the theorems asserting the impossibility of a hidden-variables theory. A (dissipative) process of information loss makes multiple classical trajectories at the Planck scale indistinguishable at low energy, so that quantum mechanics offers only probabilistic observable results, obtained when multiple independent histories are summed. In Bell's terminology, 't Hooft's theory is a theory of "beables" (Bell's coinage from the verb "to be," denoting what exists, in contrast with "observables"). This theory of "beables" is a non-realist but local (relativistic) theory. The apparent non-locality of quantum mechanics is an emergent (dynamical) phenomenon in the theory.
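    A toy illustration (my own sketch, not 't Hooft's model): a deterministic but non-invertible map on a finite set of "Planck-scale" states loses information as it is iterated, and states that end up on the same periodic orbit become indistinguishable; those orbits play the role of the equivalence classes ("beables") from which the effective low-energy description would be built.

    ```python
    import random

    def lossy_map(state: int, size: int) -> int:
        """A fixed, deterministic, non-invertible update rule (information is lost)."""
        return (state * state + 1) % size

    def orbit_label(state: int, size: int, steps: int = 1000) -> int:
        """Iterate past the transient; label the state by the smallest member of its final cycle."""
        for _ in range(steps):
            state = lossy_map(state, size)
        cycle, current = {state}, lossy_map(state, size)
        while current not in cycle:
            cycle.add(current)
            current = lossy_map(current, size)
        return min(cycle)

    if __name__ == "__main__":
        random.seed(0)
        size = 10_000                       # number of distinct 'Planck-scale' states
        labels = {orbit_label(s, size) for s in random.sample(range(size), 200)}
        print(f"200 sampled microscopic states collapse onto {len(labels)} equivalence class(es)")
    ```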

    At the Planck scale (E_P) the dynamics is purely deterministic, with well-defined (classical) trajectories. As the energy E decreases, an enormous amount of information is lost and an effective description with two levels emerges. At the microscopic level it corresponds to the Schrödinger equation and to an (apparent) action at a distance, described by quantum mechanics. At the macroscopic level, however, one recovers a classical (relativistic) description.
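    As a purely illustrative toy (my own sketch, not taken from 't Hooft's papers), the following Python snippet shows the kind of information loss described above: a deterministic but non-invertible map on a finite set of "Planck-scale" states sends many initial trajectories onto the same limit cycle, so an observer who can only distinguish equivalence classes ends up with probabilities obtained by counting how many microscopic histories collapse onto each class.

```python
from collections import Counter

# Toy deterministic but non-invertible dynamics on N "Planck-scale" states.
# The update rule is hypothetical, chosen only to illustrate information loss.
N = 1000

def step(s):
    return (3 * s * s + 7) % N

def equivalence_class(s):
    """Iterate the deterministic map until the trajectory is certainly on its
    limit cycle, then label the class by the smallest state on that cycle."""
    for _ in range(N):  # after N steps we are guaranteed to be on a cycle
        s = step(s)
    cycle = {s}
    t = step(s)
    while t != s:
        cycle.add(t)
        t = step(t)
    return min(cycle)

# Many distinct microscopic histories collapse onto a few classes; the fraction
# of initial states per class plays the role of an emergent probability.
classes = Counter(equivalence_class(s0) for s0 in range(N))
for label, count in classes.most_common():
    print(f"class {label:4d}: emergent probability {count / N:.3f}")
```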

    [Figure: 't Hooft's prequantum mechanics as the energy decreases]

    In this way the theory kills two birds with one stone. On the one hand, it resolves the problem of how the classical limit of quantum mechanics is realized: there is no such limit; they are "independent" theories. On the other hand, it resolves the problem of non-locality, the "spooky action at a distance", and explains why this phenomenon is not observed at the classical level.


    18/08/2009 Posted by | science | Leave a comment

    Exceptional research work by a Spanish theoretical physicist

    Posted by emulenews on 2 March 2009

    Physics is the outreach journal that highlights exceptional research papers published in the journals of the American Physical Society (APS). Luis Miguel Robledo Martín, a tenured professor in the Department of Theoretical Physics at the Universidad Autónoma de Madrid, has made it into that journal by determining the correct sign of a complicated mathematical expression through an innovative technique. The story is told by John Millener and Ben Gibson, "Finding the missing sign," Physics, Feb. 2009, who highlight the technical article by L. M. Robledo, "Sign of the overlap of Hartree-Fock-Bogoliubov wave functions," Phys. Rev. C 79: Art. No. 021302, published February 20, 2009.

    The Hartree-Fock-Bogoliubov (HFB) approximation is used in quantum physics to approximate the behavior of a particle subject to the effect of many other particles as if those particles generated an effective mean field. In this way one avoids having to treat them individually. The approximation was introduced by D.R. Hartree in 1928 and by V.A. Fock in 1930, although it became a fundamental tool after the work of N.N. Bogoliubov in 1958. When a more precise result is required, the approximation has to be carried to second order, which requires combining and overlapping the wave functions of the first-order HFB approximation. The sign of the overlap requires evaluating a square root, and the problem is knowing which sign to use for that square root. In some problems (those with discrete symmetries) the result is independent of the sign, so it does not matter which one is chosen. But in other problems (those in which these symmetries are broken) the approximation does not say which sign to use, and the sign has to be computed with a different technique.

    Luis Robledo has used a very elegant technique (based on fermionic coherent states) with which he determines the sign of the overlap term without any ambiguity. The sign depends on the Pfaffian of an antisymmetric matrix (a small numerical sketch of the Pfaffian is given below). The new technique is much more efficient and simpler to apply than the alternative methods, with no need to resort to non-Hermitian matrices.
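    For readers unfamiliar with the object involved, here is a minimal Python sketch (illustrative only, not Robledo's actual method) that evaluates the Pfaffian of a small skew-symmetric matrix by recursive expansion along the first row and checks the identity Pf(A)^2 = det(A). It is precisely the sign of Pf(A), which is lost when one only takes the square root of the determinant, that carries the missing information.

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional real skew-symmetric matrix,
    by recursive expansion along the first row (fine for small matrices)."""
    n = A.shape[0]
    if n % 2 == 1:
        return 0.0
    if n == 0:
        return 1.0
    if n == 2:
        return A[0, 1]
    total = 0.0
    for j in range(1, n):
        # Remove rows and columns 0 and j; the cofactor sign is (-1)**(j-1).
        keep = [k for k in range(n) if k not in (0, j)]
        minor = A[np.ix_(keep, keep)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(minor)
    return total

# Random skew-symmetric test matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M - M.T
pf = pfaffian(A)
print(f"Pf(A)  = {pf:+.6f}")
print(f"det(A) = {np.linalg.det(A):+.6f}  (should equal Pf(A)**2 = {pf**2:.6f})")
```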

    The new result has many applications, such as using the HFB approximation to study the dynamics of protons and neutrons in atomic nuclei with odd mass number (the sum of the number of protons and neutrons). Congratulations, Luis.

    Source: http://francisthemulenews.wordpress.com/category/mecanica-cuantica/page/2/

    18/08/2009 Posted by | science | , | Leave a comment

    The Clean Code Talks — Inheritance, Polymorphism, & Testing

    10/08/2009 Posted by | science | Leave a comment

    UML Tutorial

    10/08/2009 Posted by | science | Leave a comment

    Repulsive quantum effect finally measured

    18:00 07 January 2009 by Stephen Battersby
    http://www.newscientist.com/article/dn16374-repulsive-quantum-effect-finally-measured.html
    A quantum effect that causes objects to repel one another – first predicted almost 50 years ago – has at last been seen in the lab.

    According to Harvard physicist Federico Capasso, a member of the group who measured the effect, it could be used to lubricate future nanomachines.

    The team detected the weak repulsive force when they brought together a thin sheet of silica and a small gold-plated bead, about half the diameter of a human hair.

    The force is an example of the Casimir effect, generated by all-pervasive quantum fluctuations.

    Strange attraction

    The simplest way to imagine the Casimir force in action is to place two parallel metal plates in a vacuum. Thanks to the odd quantum phenomenon, these become attracted to one another.

    It happens because even a vacuum is actually fizzing with a quantum field of particles, constantly popping in and out of existence. They can even fleetingly interact with and push on the plates.

    However, the small space between the two plates restricts the kind of particles that can appear, so the pressure from behind the plates overwhelms that from between them. The result is an attractive force that gums up nanoscale machines. (To learn more about the Casimir force see Under pressure from quantum foam.)
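    For a rough sense of scale (my own back-of-the-envelope addition, not part of the article), the ideal-metal parallel-plate result P = -pi^2 * hbar * c / (240 * d^4) gives the attractive Casimir pressure as a function of plate separation d:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Ideal-metal parallel-plate Casimir pressure (negative = attractive), in Pa."""
    return -math.pi**2 * hbar * c / (240 * d**4)

for d_nm in (10, 100, 1000):
    d = d_nm * 1e-9
    print(f"d = {d_nm:5d} nm  ->  P = {casimir_pressure(d):.3e} Pa")
```

    At a separation of 100 nm this comes out to roughly 13 Pa, which is why the effect only matters for very closely spaced micro- and nanoscale parts.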

    Capasso says that the Casimir force needn’t be an enemy. “Micromechanics at some point will have to contend with these forces – or make use of them.”

    Reverse buoyancy

    In 1961, Russian theorists calculated that in certain circumstances, the Casimir effect could cause objects to repel one another – a scenario Capasso’s team have finally created experimentally. The team achieved this by adding a fluid, bromobenzene, to the setup.

    The Casimir attraction between the liquid and the silica plate is stronger than that between the gold bead and the silica, so the fluid forces its way around the bead, pushing it away from the plate.

    The effect is akin to the buoyancy we experience in the macro world – where objects less dense than water are held up by the liquid around them. But in this case the bromobenzene is less dense than the solid bead. “You could call it quantum buoyancy,” Capasso told New Scientist.
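    In Lifshitz theory the sign of the force between bodies 1 and 2 across a medium 3 is set by the ordering of their dielectric responses: roughly, repulsion requires eps1(i*xi) > eps3(i*xi) > eps2(i*xi) over the dominant frequencies, an ordering that gold, bromobenzene and silica satisfy. A minimal sketch of that sign rule follows (the permittivity values are illustrative placeholders, not measured data):

```python
# Illustrative (order-of-magnitude) static permittivities only; the real
# criterion involves eps(i*xi) over a wide range of imaginary frequencies.
eps = {"gold": 1e3, "bromobenzene": 5.4, "silica": 3.8, "vacuum": 1.0}

def casimir_sign(body1, medium, body2):
    """'repulsive' if the medium's permittivity lies strictly between those of
    the two bodies (the Dzyaloshinskii-Lifshitz-Pitaevskii rule), else 'attractive'."""
    e1, em, e2 = eps[body1], eps[medium], eps[body2]
    return "repulsive" if (e1 - em) * (em - e2) > 0 else "attractive"

print(casimir_sign("gold", "bromobenzene", "silica"))  # repulsive
print(casimir_sign("gold", "vacuum", "silica"))        # attractive
```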

    The force he measured was feeble – amounting to just a few tens of piconewtons – but that is still enough to buoy up nanoscale objects.

    Quantum bearings

    “The next experiment we want to do is use a TV camera to track the motion of one of these spheres, then we should be able to see easily whether you have levitation.”

    Harnessing the repulsive Casimir force could provide a kind of lubrication to solve the problem of nanomachines becoming gummed up by the better-known attractive version, says Capasso.

    In theory you could instead use a liquid denser than the components to buoy them up, but that wouldn’t be practical. “These gizmos are usually made of metal, so you would have to use mercury,” he explains.

    Quantum buoyancy bearings could be used to build delicate sensors, such as a floating “nanocompass” to detect small-scale magnetic fields.

    Journal reference: Nature (DOI: 10.1038/nature07610)

    17/01/2009 Posted by | science | | Leave a comment

    Zeroing in on Hubble’s Constant

    Pasadena, CA. In the early part of the 20th century, Carnegie astronomer Edwin Hubble discovered that the universe is expanding. The rate of expansion is known as the Hubble constant, and its precise value has been hotly debated for all of the 80 intervening years. The value of the Hubble constant is a key ingredient in determining the age and size of the universe. In 2001, as part of the Hubble Space Telescope Key Project, a team of astronomers led by Carnegie's Wendy Freedman determined precision distances to individual far-off galaxies and used them to determine that the universe is expanding at the rate of 72 kilometers per second per megaparsec. While the debate had previously raged over a factor-of-two uncertainty in the Hubble constant, Freedman and her team cut that uncertainty down to just 10%. And now that number is about to be decreased to 3% with the new Carnegie Hubble Program (CHP) using NASA's space-based Spitzer telescope. Freedman, who is director of the Observatories of the Carnegie Institution, will lead the effort, which includes Carnegie staff members Barry Madore and Eric Persson, and Carnegie Spitzer Fellow Jane Rigby.

    The Carnegie Hubble proposal was just selected by the Spitzer Science Center, on behalf of NASA, as a Cycle-6 Exploration Science Program using Spitzer. This space telescope currently takes images and spectra—chemical fingerprints—of objects by detecting their heat, or infrared (IR) energy, between wavelengths of 3 and 180 microns (a micron equals one-millionth of a meter). Most infrared radiation is blocked by the Earth's atmosphere and thus has to be detected from space. The Hubble Key Project observed distant objects primarily at optical wavelengths. In its post-cryogenic phase, beginning in April 2009, Spitzer will have exhausted its liquid helium coolant but will still be able to operate two of its imaging detectors that are sensitive to the near-infrared. This portion of the electromagnetic spectrum has numerous advantages, especially when observing Cepheid variable stars, the so-called "standard candles" that are used to determine distances to distant galaxies.

     “The power of Spitzer,” explained Freedman, “is that it will allow us to virtually eliminate the dimming and obscuring effects of dust. It offers us the ability to make the most precise measurements of Cepheid distances that have ever been made, and to bring the uncertainty in the Hubble constant down to the few percent level.”

    Cepheids are extremely bright, pulsating stars whose pulsation periods are directly related to their intrinsic luminosities. So, by measuring their periods and apparent brightnesses, their individual distances, and therefore the distances to their parent galaxies, can be determined. By comparing how fast more distant galaxies are receding from us with how far away they are, we can calculate the Hubble constant and from that determine the size and age of the universe.
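    As an illustration of that chain of reasoning (my own sketch, using a representative period-luminosity calibration and made-up example numbers, not data from the Carnegie program), one can go from a Cepheid's period and apparent magnitude to a distance, and from a recession velocity and distance to a value of the Hubble constant:

```python
import math

def cepheid_distance_mpc(period_days, apparent_mag):
    """Distance from a V-band period-luminosity relation.
    The calibration M_V ~ -2.43*(log10 P - 1) - 4.05 is representative only."""
    M = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    distance_pc = 10 ** ((apparent_mag - M + 5.0) / 5.0)
    return distance_pc / 1.0e6

def hubble_constant(velocity_km_s, distance_mpc):
    """H0 = v / d in km/s/Mpc (valid in the nearby, linear regime)."""
    return velocity_km_s / distance_mpc

# Hypothetical example numbers, for illustration only.
d = cepheid_distance_mpc(period_days=30.0, apparent_mag=26.0)
print(f"Cepheid distance ~ {d:.1f} Mpc")
print(f"H0 ~ {hubble_constant(velocity_km_s=1200.0, distance_mpc=17.0):.1f} km/s/Mpc")
```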

    One of the largest uncertainties plaguing past measurements of the Hubble constant involved the distance to the Large Magellanic Cloud (LMC), a relatively nearby galaxy orbiting the Milky Way. Freedman and colleagues will begin their 700 hours of observations by refining the distance to the LMC, using Cepheids newly calibrated with Spitzer observations of similar stars in our own Milky Way. They will then measure Cepheid distances to all of the nearest galaxies previously observed from the ground over the past century and by the Key Project, acquiring distances to galaxies in our Local Group and beyond. The Local Group, our galactic neighborhood, comprises some 40 galaxies. Observing in the near-IR will also let the team correct for lingering uncertainties: systematic errors, such as whether chemical composition differences among Cepheids affect the period-luminosity relation, will be examined using the infrared data. Spitzer will begin to execute the Carnegie Hubble Program in June 2009 and continue for at least the next two years.

     “In the age of precision cosmology one of the key factors in securing the fundamental numbers that describe the time evolution and make-up of our universe is the Hubble constant. Ten percent is simply not good enough. Cosmologists need to know the expansion rate of the universe to as high a precision and as great an accuracy as we can deliver,” remarked Carnegie co-investigator, Barry Madore.

     ——————-

    The Spitzer Space Telescope was launched in August 2003. It detects energy from celestial objects in the infrared part of the spectrum, which is able to penetrate areas in space not visible in the optical spectrum such as dense clouds of gas and dust where stars form, new extrasolar planetary systems, and galactic centers. NASA's Jet Propulsion Laboratory, Pasadena, CA, manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology. Caltech manages JPL for NASA. See http://www.spitzer.caltech.edu

    Source: http://www.ciw.edu/news/zeroing_hubble_s_constant

    10/01/2009 Posted by | science | Leave a comment

    Lorentz Invariance Abnormality Could Overturn Building Block Of Einstein’s Relativity, Say Physicists

    Physicists at Indiana University have developed a promising new way to identify a possible abnormality in a fundamental building block of Einstein’s theory of relativity known as “Lorentz invariance.” If confirmed, the abnormality would disprove the basic tenet that the laws of physics remain the same for any two objects traveling at a constant speed or rotated relative to one another.

    IU distinguished physics professor Alan Kostelecky and graduate student Jay Tasson take on the long-held notion of the exact symmetry promulgated in Einstein’s 1905 theory and show in a paper to be published in the Jan. 9 issue of Physical Review Letters that there may be unexpected violations of Lorentz invariance that can be detected in specialized experiments.

    “It is surprising and delightful that comparatively large relativity violations could still be awaiting discovery despite a century of precision testing,” said Kostelecky. “Discovering them would be like finding a camel in a haystack instead of a needle.”

    If the findings help reveal the first evidence of Lorentz violations, it would prove relativity is not exact. Space-time would not look the same in all directions and there would be measurable relativity violations, however minuscule.

    The violations can be understood as preferred directions in empty space-time caused by a mesh-like vacuum of background fields. These would be separate from the entirety of known particles and forces, which are explained by a theory called the Standard Model that includes Einstein’s theory of relativity.

    The background fields are predicted by a generalization of this theory called the Standard Model Extension, developed by Kostelecky to describe all hypothetical relativity violations.

    Hard to detect, each background field offers its own universal standard for determining whether or not an object is moving, or in which direction it is going. If a field interacts with certain particles, then the behavior of those particles changes and can reveal the relativity violations caused by the field. Gravity distorts the fields, and this produces particle behaviors that can reveal otherwise hidden violations.

    The new violations change the gravitational properties of objects depending on their motion and composition. Objects on the Earth are always moving differently in different seasons because the Earth revolves around the Sun, so apples could fall faster in some seasons than others. Also, different objects like apples and oranges may fall differently.

    “No dedicated experiment has yet sought a seasonal variation of the rate of an object’s fall in the Earth’s gravity,” said Kostelecky. “Since Newton’s time over 300 years ago, apples have been assumed to fall at the same rate in the summer and the winter.”

    Spotting these minute variances is another matter as the differences in rate of fall would be tiny because gravity is a weak force. The new paper catalogues possible experiments that could detect the effects. Among them are ones studying gravitational properties of matter on the Earth and in space.
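    To make the "apples fall differently in different seasons" idea concrete, here is a toy calculation (entirely my own illustration; the modulation amplitude is a hypothetical placeholder, not an SME prediction) of how a tiny annual variation in the local free-fall acceleration would change the drop time from a fixed height:

```python
import math

G0 = 9.80665            # standard free-fall acceleration, m/s^2
YEAR = 365.25 * 86400.0 # seconds in a year

def g_seasonal(t_seconds, amplitude=1e-10):
    """Free-fall acceleration with a hypothetical annual modulation."""
    return G0 * (1.0 + amplitude * math.cos(2.0 * math.pi * t_seconds / YEAR))

def drop_time(height_m, g):
    """Time to fall height_m from rest: t = sqrt(2h/g)."""
    return math.sqrt(2.0 * height_m / g)

h = 56.0  # roughly the height of the Leaning Tower of Pisa, in meters
t_summer = drop_time(h, g_seasonal(0.0))
t_winter = drop_time(h, g_seasonal(YEAR / 2.0))
print(f"drop-time difference over the year: {abs(t_summer - t_winter):.3e} s")
```

    With a fractional modulation of 1e-10 the drop-time difference is of order a tenth of a nanosecond, which gives a feel for why such dedicated experiments demand extreme precision.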

    The Standard Model Extension predicts that a particle and an antiparticle would interact differently with the background fields, which means matter and antimatter would feel gravity differently. So, an apple and an anti-apple could fall at different rates, too.

    “The gravitational properties of antimatter remain largely unexplored,” said Kostelecky. “If an apple and an anti-apple were dropped simultaneously from the leaning Tower of Pisa, nobody knows whether they would hit the ground at the same or different times.”

    Source:

    http://www.scientificblogging.com/news_releases/lorentz_invariance_abnormality_could_overturn_building_block_einsteins_relativity_say_physicists

    10/01/2009 Posted by | science | 2 Comments

    the physical body problem

    “How many bodies are required before we have a problem? G.E. Brown points out that this can be answered by a look at history.

    In eighteenth-century Newtonian mechanics, the three-body problem was insoluble. With the birth of relativity around 1910 and quantum electrodynamics in 1930, the two- and one-body problems became insoluble. And within modern quantum field theory, the problem of zero bodies (vacuum) is insoluble.

    So, if we are out after exact solutions, no bodies at all is already too many!”

         R.D. Mattuck, A Guide to Feynman Diagrams in the Many-Body Problem (2nd ed., McGraw-Hill, New York, 1976)

    13/12/2008 Posted by | science | Leave a comment

    What is the Higgs boson, and why do we want to find it?

    http://www.phy.uct.ac.za/courses/phy400w/particle/higgs.htm

    07/09/2008 Posted by | science | Leave a comment