Exclusive: Grave doubts over LIGO's discovery of gravitational waves

List members, I am sure you will get your own Aha moment while reading this article - I certainly chuckled to myself :)) This is a science SCAM, no other way to describe it, because guess what - Nobel Prizes have been awarded as a result, proof for the existence of black holes has been claimed (!!) and even science textbooks were updated :(( after this "decision", which was taken with indecent HASTE!

**I wish to add here that if LIGO were to honestly share their RAW data, AI tools could now be used to quickly get to the truth!!

Exclusive: Grave doubts over LIGO's discovery of gravitational waves

The news we had finally found ripples in space-time reverberated around the world in 2015. Now it seems they might have been an illusion

Physics 31 October 2018

By Michael Brooks

[Image: LIGO's detectors. Credit: Enrico Sacchetti]

THERE was never much doubt that we would observe gravitational waves sooner or later. This rhythmic squeezing and stretching of space and time is a natural consequence of one of science’s most well-established theories, Einstein’s general relativity. So when we built a machine capable of observing the waves, it seemed that it would be only a matter of time before a detection.

In point of fact, it took two days. The Laser Interferometer Gravitational-Wave Observatory collaboration, better known as LIGO, switched on its upgraded detectors on 12 September 2015. Within 48 hours, it had made its first detection. It took a few months before the researchers were confident enough in the signal to announce a discovery. Headlines around the world soon heralded one of the greatest scientific breakthroughs of the past century. In 2017, a Nobel prize followed. Five other waves have since been spotted.

[Video: Gravitational wave hunting - physicist Stephen Fairhurst on how he searches for signals from merging black holes and neutron stars]

Or have they? That’s the question asked by a group of physicists who have done their own analysis of the data. “We believe that LIGO has failed to make a convincing case for the detection of any gravitational wave event,” says Andrew Jackson, the group’s spokesperson. According to them, the breakthrough was nothing of the sort: it was all an illusion.

The big news of that first sighting broke on 11 February 2016. In a press conference, senior members of the collaboration announced that their detectors had picked up the signature of gravitational waves emitted as a pair of distant black holes spun into one another.

The misgivings of Jackson’s group, based at the Niels Bohr Institute in Copenhagen, Denmark, began with this press conference. The researchers were surprised at the confident language with which the discovery was proclaimed and decided to inspect things more closely.

Their claims are not vexatious, nor do they come from ill-informed troublemakers. Although the researchers don’t work on gravitational waves, they have expertise in signal analysis, and experience of working with large data sets such as the cosmic microwave background radiation, the afterglow of the big bang that is spread in a fine pattern across the sky. “These guys are credible scientists,” says Duncan Brown at Syracuse University in New York, a gravitational wave expert who recently left the LIGO collaboration.

[Image: The first gravitational wave discovery was announced to the world on 11 February 2016. Credit: SAUL LOEB/AFP/Getty Images]

Gravitational waves are triggered by the collision of massive objects such as black holes or neutron stars. They travel for billions of years, alternately squeezing and stretching the space-time in their path. Spreading out in all directions, they get weaker as they go, but they can be detected on Earth with a sufficiently sensitive instrument.

The LIGO collaboration built two such instruments, the Hanford detector in Washington state and the Livingston detector in Louisiana. A third, independent instrument called Virgo, located near Pisa, Italy, joined the others in 2017. These “interferometers” shoot lasers down two long tunnels, then reflect them back in such a way that the pulses should arrive at the same time. Passing gravitational waves will distort space-time, making one tunnel longer than the other, and throwing off the synchronisation.

By the time the waves wash over Earth, they are extremely weak, and the sort of change in tunnel length we expect is equivalent to about a thousandth of the diameter of a proton. That is far smaller than the disturbances that come from background seismic tremors and even the natural thermal vibrations of the detector hardware. Noise is a huge problem in gravitational wave detections.
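To get a feel for the numbers quoted above, here is a rough back-of-envelope sketch (the 4 km arm length is LIGO's published figure; the proton diameter is an approximate textbook value, so treat the result as order-of-magnitude only):

```python
# Rough order-of-magnitude check of the strain described above.
# Assumes LIGO's 4 km arm length; the displacement is the
# "thousandth of a proton diameter" quoted in the article.

ARM_LENGTH_M = 4_000.0                   # length of each interferometer arm (metres)
PROTON_DIAMETER_M = 1.7e-15              # approximate proton diameter (metres)
delta_L = PROTON_DIAMETER_M / 1_000.0    # ~ a thousandth of a proton diameter

strain = delta_L / ARM_LENGTH_M          # dimensionless strain h = dL / L
print(f"Implied strain h ~ {strain:.1e}")   # roughly 4e-22
```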

Hence why there are detectors in different places. We know that gravitational waves travel at the speed of light, so any signal is only legitimate if it appears in all the detectors at the right time interval. Subtract that common signal, and what is left is residual noise unique to each detector at any moment, because its seismic vibrations and so on constantly vary.

This is LIGO’s main ploy for extracting a gravitational wave signal from the noise. But when Jackson and his team looked at the data from the first detection, their doubts grew. At first, Jackson printed out graphs of the two raw signals and held them to a window, one on top of the other. He thought there was some correlation between the two. He and his team later got hold of the underlying data the LIGO researchers had published and did a calculation. They checked and checked again. But still they found that the residual noise in the Hanford and Livingston detectors had characteristics in common. “We came to a conclusion that was very disturbing,” says Jackson. “They didn’t separate signal from noise.”
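For readers who want to see what "checking the residuals for correlation" means in practice, here is a minimal sketch of the idea. The function, the 4096 Hz sample rate and the white-noise placeholders are illustrative assumptions, not LIGO's or the Danish group's actual pipeline; real residuals would come from subtracting a best-fit waveform from each detector's strain data.

```python
import numpy as np

def residual_correlation(res_hanford, res_livingston, fs=4096, max_lag_ms=10):
    """Normalised cross-correlation between two detector residuals over a
    small range of time lags (the light travel time between the Hanford and
    Livingston sites is at most about 10 ms)."""
    max_lag = int(max_lag_ms * 1e-3 * fs)
    a = (res_hanford - res_hanford.mean()) / res_hanford.std()
    b = (res_livingston - res_livingston.mean()) / res_livingston.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.array([np.mean(a[max(0, -k): len(a) - max(0, k)] *
                             b[max(0, k): len(b) - max(0, -k)]) for k in lags])
    return lags / fs, corr   # lag in seconds, correlation coefficient

# Usage with white-noise placeholders: truly independent noise should show
# no preferred lag, whereas correlated residuals would produce a clear peak.
rng = np.random.default_rng(0)
lags, corr = residual_correlation(rng.normal(size=4096), rng.normal(size=4096))
print(f"peak |correlation| = {np.abs(corr).max():.3f}")
```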

The Danish team wrote up their research and posted it online. After receiving no response from the LIGO collaboration, they submitted it to the Journal of Cosmology and Astroparticle Physics. The journal’s editor, Viatcheslav Mukhanov of the Ludwig Maximilian University in Munich, Germany, is a world-renowned cosmologist. The editorial and advisory boards include top physicists such as Martin Rees from the University of Cambridge, Joanna Dunkley at the University of Oxford and Andrei Linde of Stanford University in California.

Mukhanov sent the paper for review by suitably qualified experts. Reviewers’ identities are routinely kept secret so they can comment freely on manuscripts, but these were people with a “high reputation”, says Mukhanov. “Nobody was able to point out a concrete mistake in the Danish analysis,” he says. “There is no mistake.”

A storm in a teacup, still? General relativity is one of our most well-verified theories, after all, so there is every reason to think its prediction of gravitational waves is correct. We know LIGO should be sensitive enough to detect them. The instruments are finding the waves at exactly the right rate predicted by theory. So why worry about this noise?

Seek and ye shall find

There’s a simple answer to that question. Physicists have made mistakes before, mistakes that have been exposed only by paying close attention to experimental noise (see “Embarrassing noises”).

The first step to resolving the gravitational wave dispute is to ask how LIGO’s researchers know what to look for. The way they excavate signal from noise is to calculate what a signal should look like, then subtract it from the detected data. If the result looks like pure, residual noise, they mark it as a detection.

Working out what a signal should look like involves solving Einstein’s equations of general relativity, which tell us how gravitational forces deform space-time. Or at least it would if we could do the maths. “We are unable to solve Einstein’s equations exactly for the case of two black holes merging,” says Neil Cornish at Montana State University, a senior figure among LIGO’s data analysts. Instead, the analysts use several methods to approximate the signals they expect to see.

The first, known as the numerical method, involves cutting up space-time into chunks. Instead of solving the equations for a continuous blob of space, you solve them for a limited number of pieces. This is easier but still requires huge computing power, meaning it can’t be done for every possible source of gravitational waves.

A more general approach, known as the analytic method, uses an approximation of Einstein’s equations to produce templates for gravitational wave signals that would be created by various sources, such as black holes with different masses. These take a fraction of a second to compute, but aren’t accurate enough to model the final merger of two black holes. This endgame is modelled in an add-on calculation in which researchers tweak the parameters to fit the results of the initial analytic solution.
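As a purely illustrative example of what such a template looks like, here is a deliberately crude Newtonian-order "chirp" - not one of LIGO's actual template families, just the textbook leading-order frequency sweep for an inspiralling pair of masses:

```python
import numpy as np

def toy_chirp(m1_solar, m2_solar, fs=4096, f_start=30.0):
    """Very rough Newtonian-order inspiral chirp: frequency and amplitude
    rise until the waveform is cut off shortly before merger.  Illustrative
    only; real templates include many higher-order corrections."""
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
    m1, m2 = m1_solar * M_sun, m2_solar * M_sun
    M_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2            # chirp mass
    k = (96.0 / 5.0) * np.pi ** (8.0 / 3.0) * (G * M_chirp / c ** 3) ** (5.0 / 3.0)
    tau = (3.0 / 8.0) / (k * f_start ** (8.0 / 3.0))         # time to coalescence
    t = np.arange(0.0, 0.999 * tau, 1.0 / fs)                # stop just before merger
    f = f_start * (1.0 - t / tau) ** (-3.0 / 8.0)            # frequency sweep
    phase = 2.0 * np.pi * np.cumsum(f) / fs                  # integrate frequency
    h = (f / f_start) ** (2.0 / 3.0) * np.cos(phase)         # growing amplitude
    return t, h

t, h = toy_chirp(36, 29)   # masses similar to those reported for the first event
print(f"toy chirp lasts {t[-1]:.2f} s from 30 Hz to cutoff")
```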

[Image: To spy gravitational waves, LIGO's detectors need a quiet environment. Credit: David Ryder/Bloomberg via Getty Images]

This use of precalculated templates is a problem, Cornish concedes. “With a template search, you can only ever find what you’re looking for.” What’s more, there are some templates, such as those representing the waves created by certain types of supernovae explosions, that LIGO researchers can’t create.

That’s why Cornish prefers the third method, which he helped develop. It involves building a model from what he calls wavelets. These are like tiny parts of a wave signal that can be assembled in various ways. You vary the number and shape of the parts until you find a combination that removes the signal from the noise. Because wavelet analysis makes no assumptions about what created the gravitational wave, it can make the most profound discoveries. The wavelets “allow us to detect the unknown unknowns”, says Cornish. The downside is that they tell us nothing about the physical attributes of the detected source. For that, we have to compare the constructed signal against the templates or the numerical analysis.
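A minimal sketch of the wavelet idea: build a candidate signal by summing small, localised oscillations (sine-Gaussians) whose number and parameters can be varied. This is only the building block, under my own simplifying assumptions; the collaboration's actual wavelet analysis is far more elaborate.

```python
import numpy as np

def sine_gaussian(t, t0, f0, q, amp):
    """One sine-Gaussian 'wavelet': an oscillation at frequency f0, centred
    at time t0, with quality factor q controlling how long it lasts."""
    tau = q / (2.0 * np.pi * f0)                       # envelope width
    return amp * np.exp(-((t - t0) ** 2) / (2.0 * tau ** 2)) \
               * np.cos(2.0 * np.pi * f0 * (t - t0))

def wavelet_sum(t, params):
    """Assemble a candidate signal from wavelet parameter tuples
    (t0, f0, q, amp).  A fitting routine would vary the number of wavelets
    and their parameters until the residual looks like plain noise."""
    return sum(sine_gaussian(t, *p) for p in params)

fs = 4096
t = np.arange(0, 1.0, 1.0 / fs)
# Three hypothetical wavelets sketching a rising-frequency burst
model = wavelet_sum(t, [(0.48, 80, 6, 1.0), (0.50, 150, 8, 1.5), (0.51, 250, 10, 1.2)])
```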

The challenge with all three methods is that accurately removing the signal from the data requires you to know when to stop. In other words, you have to understand what the residual noise should look like. That is exceedingly tricky. You can forget running the detector in the absence of gravitational waves to get a background reading. The noise changes so much that there is no reliable background. Instead, LIGO relies on characterising the noise in the detectors, so they know what it should look like at any given time. “A lot of what we do is modelling and studying the noise,” says Cornish.
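A standard first step in characterising noise of this kind is to estimate its power spectral density. Below is a minimal sketch using Welch's method, with synthetic white noise standing in for real strain data (the sample rate and segment length are my assumptions):

```python
import numpy as np
from scipy.signal import welch

fs = 4096                                  # sample rate in Hz
rng = np.random.default_rng(1)
noise = rng.normal(size=64 * fs)           # placeholder for detector strain data

# Welch's method: average periodograms of overlapping, windowed segments
freqs, psd = welch(noise, fs=fs, nperseg=4 * fs)

# Amplitude spectral density, the quantity usually plotted for detector noise
asd = np.sqrt(psd)
print(f"median ASD in the 50-300 Hz band: "
      f"{np.median(asd[(freqs > 50) & (freqs < 300)]):.2e}")
```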

“The paper on the first detection used a data plot that was more ‘illustrative’ than precise”

Jackson is suspicious of LIGO’s noise analysis. One of the problems is that there is no independent check on the collaboration’s results. That wasn’t so with the other standout physics discovery of recent years, the Higgs boson. The particle’s existence was confirmed by analysing multiple, well-controlled particle collisions in two different detectors at CERN near Geneva, Switzerland. Both detector teams kept their results from each other until the analysis was complete.

By contrast, LIGO must work with single, uncontrollable, unrepeatable events. Although there are three detectors, they work almost as one instrument. And despite there being four data-analysis teams, they cannot work entirely separately, because part of the detection process involves checking that all the instruments saw the signal. It creates a situation in which each positive observation is an uncheckable conclusion. Outsiders have to trust that LIGO is doing its job properly.

Purely illustrative

And there are legitimate questions about that trust. New Scientist has learned, for instance, that the collaboration decided to publish data plots that were not derived from actual analysis. The paper on the first detection in Physical Review Letters used a data plot that was more “illustrative” than precise, says Cornish. Some of the results presented in that paper were not found using analysis algorithms, but were done “by eye”.

Brown, part of the LIGO collaboration at the time, explains this as an attempt to provide a visual aid. “It was hand-tuned for pedagogical purposes.” He says he regrets that the figure wasn’t labelled to point this out.

This presentation of “hand-tuned” data in a peer-reviewed, scientific report like this is certainly unusual. New Scientist asked the editor who handled the paper, Robert Garisto, whether he was aware that the published data plots weren’t derived directly from LIGO’s data, but were “pedagogical” and done “by eye”, and whether the journal generally accepts illustrative figures. Garisto declined to comment.

There were also questionable shortcuts in the data LIGO released for public use. The collaboration approximated the subtraction of the Livingston signal from the Hanford one, leaving correlations in the data – the very correlations Jackson noticed. There is now a note on the data release web page stating that the publicly available waveform “was not tuned to precisely remove the signal”.

Whatever the shortcomings of the reporting and data release, Cornish insists that the actual analysis was done with processing tools that took years to develop and significant computing power to implement – and it worked perfectly.

However, anyone outside the collaboration has to take his word for that. “It’s problematic: there’s not enough data to do the analysis independently,” says Jackson. “It looks like they’re being open, without being open at all.”

Brown agrees there is a problem. “LIGO has taken great strides, and are moving towards open data and reproducible science,” he says. “But I don’t think they’re quite there yet.”

The Danish group’s independent checks, published in three peer-reviewed papers, found there was little evidence for the presence of gravitational waves in the September 2015 signal. On a scale from certain at 1 to definitely not there at 0, Jackson says the analysis puts the probability of the first detection being from an event involving black holes with the properties claimed by LIGO at 0.000004. That is roughly the same as the odds that your eventual cause of death will be a comet or asteroid strike – or, as Jackson puts it, “consistent with zero”. The probability of the signal being due to a merger of any sort of black holes is not huge either. Jackson and his colleagues calculate it as 0.008.

Simultaneous signal

There is other evidence to suggest that at least one of the later detections came from a gravitational wave. On 17 August 2017, the orbiting Fermi telescope saw a burst of electromagnetic radiation at the same time as the LIGO and Virgo detectors picked up a signal. Analysis of all the evidence suggests that both signals came from the brutal collision of two neutron stars.

The double whammy makes LIGO’s detection seem unequivocal. Even here, though, the Danish group is dissenting. They point out that the collaboration initially registered the event as a false alarm because it coincided with what’s known as a “glitch”. The detectors are plagued by these short, inexplicable bursts of noise, sometimes several every hour. They seem to be something to do with the hardware with which the interferometers are built, the suspension wires and seismic isolation devices. Cornish says that LIGO analysts eventually succeeded in removing the glitch and revealing the signal, but Jackson and his collaborators are again unconvinced by the methods used, and the fact there is no way to check them.

What are we to make of all this? Nothing, apparently. “The Danish analysis is just wrong,” insists Cornish. “There were very basic mistakes.” Those “mistakes” boil down to decisions about how best to analyse the raw data (see “How to catch a wave”).

Not everyone agrees the Danish choices were wrong. “I think their paper is a good one and it’s a shame that some of the LIGO team have been so churlish in response,” says Peter Coles, a cosmologist at Maynooth University in Ireland. Mukhanov concurs. “Right now, this is not the Danish group’s responsibility. The ball is in LIGO’s court,” he says. “There are questions that should be answered.”

Brown thinks the Danish group’s analysis is wrong, but worth engaging with. And Cornish admits the scrutiny may not be a bad thing. He and his colleagues plan to put out a paper describing the detailed properties of the LIGO noise. “It’s the kind of paper we didn’t really want to write because it’s boring and we’ve got more exciting things to do.” But, he adds, it is important, and increased scrutiny and criticism may in the end be no bad thing. “You do have to understand your noise.”

Coles himself doesn’t doubt that we have detected gravitational waves, but agrees with Jackson that this cannot be confirmed until independent scientists can check the raw data and the analysis tools. “In the spirit of open science, I think LIGO should release everything needed to reproduce their results.”

Jackson is unconvinced that explanatory papers will ever materialise – the collaboration has promised them before, he says. “This LIGO episode continues to be the most shocking professional experience of my 55 years as a physicist,” he says. Not everyone would agree – but for a discovery of this magnitude, trust is everything.

Embarrassing noises

In 2014, the operators of the BICEP2 telescope made an announcement so momentous there was talk of a Nobel prize. A year later however, far from making their way to Stockholm for the award ceremony, they were forced to admit they had been fooled by an embarrassing noise.

Situated at the South Pole, BICEP2 had been scanning the cosmic microwave background, the pattern of radiation left on the sky from light emitted soon after the big bang. The big announcement was that it had found that gravitational waves had affected the pattern in such a way that proved a core theory of cosmology. The theory in question was inflation, which says the universe went through a period of superfast growth right after the big bang. For almost four decades it had been unproven. Now, suddenly, inflation’s supporters were vindicated.

Except awkward warnings emerged within weeks, suggesting that cosmic dust clouds had scattered the radiation in a way that fooled the BICEP2 researchers. In the end, the team’s estimate of the amount of dust present and the analysis of the kind of noise the dust would produce both proved to be flawed. Noise can hoodwink even the smartest. That is why, despite LIGO being a highly respected collaboration, there is good reason to take questions about its noise analysis seriously (see main story).

How to catch a wave

Output from gravitational wave detectors is full of noise. Disentangling the signal requires decision-making – and poor decisions could be disastrously misleading.

The best weapon in the arsenal is known as a Fourier transform. This splits a signal into various frequency components and converts it into a power spectrum, which details how much of the signal’s power is contained in each of those components. This can be done with a window function, a mathematical tool that operates on a selected part of the data. Whether or not to use one is at the heart of the disagreement over LIGO’s results (see main story).

Andrew Jackson’s dissenting team at the Niels Bohr Institute in Denmark chose not to use a window function, a decision that LIGO’s Neil Cornish describes as a “basic mistake”. Jackson says they didn’t use one because it subtly alters the Fourier-transformed data in a way that can skew the results of subsequent processing.
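The practical effect of a window function is easy to demonstrate: without one, the Fourier transform of a finite stretch of data "leaks" power across frequencies; with one, the leakage drops but the data are tapered. The toy example below shows only this textbook effect - it is not a re-run of either group's analysis, and the signal and sample rate are arbitrary choices of mine.

```python
import numpy as np

fs = 4096
t = np.arange(0, 1.0, 1.0 / fs)
# A tone that does not sit exactly on an FFT bin, so leakage is visible
x = np.sin(2 * np.pi * 123.4 * t)

spec_plain = np.abs(np.fft.rfft(x))                        # no window
spec_hann = np.abs(np.fft.rfft(x * np.hanning(len(x))))    # Hann window applied

freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
far = freqs > 400    # look well away from the 123.4 Hz tone
print(f"leakage floor without window: {spec_plain[far].max():.2e}")
print(f"leakage floor with Hann window: {spec_hann[far].max():.2e}")
```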

Even with the Fourier analysis done, judgements must be made about the noise in the detectors. Is it, for example, distributed in a predictable pattern equivalent to the bell-shaped Gaussian distribution? And does it vary over time or is it “stationary”? The appropriate techniques for processing the data are different depending on the answers to these questions, so reliably detecting gravitational waves depends on making the right assumptions. Jackson’s group says the decisions made during the LIGO analysis are opaque at best, and probably wrong.
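Both assumptions can at least be spot-checked on any stretch of data. Here is a minimal sketch, again with synthetic Gaussian noise standing in for detector output; the chunking scheme is an arbitrary choice for illustration.

```python
import numpy as np
from scipy import stats

fs = 4096
rng = np.random.default_rng(2)
segment = rng.normal(size=60 * fs)          # placeholder for one minute of data

# Is the amplitude distribution consistent with a Gaussian?
stat, p_gauss = stats.normaltest(segment)

# Crude stationarity check: does the variance drift between one-second chunks?
chunk_vars = [chunk.var() for chunk in np.array_split(segment, 60)]
drift = max(chunk_vars) / min(chunk_vars)

print(f"Gaussianity p-value: {p_gauss:.2f}, variance drift across chunks: {drift:.2f}")
```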

This article appeared in print under the headline “Wave goodbye?”

Leader: “The LIGO collaboration must respond to gravitational wave criticism”

Read more: https://www.newscientist.com/article/mg24032022-600-exclusive-grave-doubts-over-ligos-discovery-of-gravitational-waves/

The LIGO collaboration must respond to gravitational wave criticism

Science advances through open scrutiny of results. Even if they're wrong, the questions raised about the 2016 gravitational wave discovery should be answered

Physics | Leader 31 October 2018

[Image: Press conference. Credit: SAUL LOEB/AFP/Getty Images]

THE news that gravitational waves had been detected reverberated beyond the halls of physics. Confirming a long-standing prediction of Einstein’s general theory of relativity, the discovery presaged a new era in cosmology. Then came the doubts. No experiment could reproduce the claimed signal, and theorists began to question whether Joseph Weber’s massive aluminium bars, set up at the University of Maryland, could really have been moved by ripples in space-time.

The theory and experimental practice of gravitational-wave detection have advanced immeasurably since these events of the summer of 1969, and few would bank on history repeating itself. 


Read more: https://www.newscientist.com/article/mg24032022-800-the-ligo-collaboration-must-respond-to-gravitational-wave-criticism/

Regards

Folks, what now follows below is my own idea about a high-potential use case of AI for processing a certain type of data. It's an idea which I honestly believe is original :)) - yes, I actually checked the Internet and verified that it's a unique concept. Also, given that humans (already) can't match AI anymore in that capability (too bad, but 100% true), this is a thought process that I've already floated on another forum before testing it here :-

"An excellent use case for Artificial Intelligence (AI) Tools , could be to mine all the Nobel Prizes and patents granted over last 50 years and also all the bids for Nobel Prize or patent applications that were rejected , during that same period .

By comparing Nobel Prizes or patents granted with bids/applications rejected, the AI would be able to detect any biases that might have crept in - biases in the context of prevailing paradigms vs alternative frameworks. This is important because the peer review system that is used to "screen" all Nobel Prize bids/patent applications is driven by consensus. There can be different views about this, but if one thinks very deeply, consensus is actually more suitable for politics and democratic processes than cutting-edge innovation, which by its very nature is on the "fringe" of any existing framework, or maybe even beyond it...so the question is - what if there was a bias towards existing paradigms? What might be the opportunity cost to society of all the meritorious Nobel Prize bids/patent applications that were rejected?

On the other hand, there's a need to distinguish between game-changing ideas and the ones that are merely "incremental innovations" upon existing frameworks. Therefore, is there a case for "gradation" in the patent regime itself, one that uses AI to distinguish between game changers and incremental innovators?? Food for thought perhaps?

**Taking this thought process one level higher, how about using AI itself to determine the best possible use cases of AI? Wouldn't that be one of the best applications for AI?

Another interesting application could be to pick up past data from any given field - say, climate data of the past 100 years - and run AI to establish which climate model best describes the historical data (a minimal sketch of this kind of model comparison appears after this note).

Similarly, vast amounts of raw data are being generated by all the telescopes and by institutions like CERN, NASA and LIGO. So take all the science frameworks that explain astronomical phenomena and run that raw data through AI, to determine which framework most accurately describes past data.

Using AI, an absolutely objective re-assessment of ALL the frameworks and paradigms used by our civilisation could be conducted, thus eliminating any possibility of a selection bias in ANY field of human endeavour. It's most exciting to THINK that AI already gives us the ability to conduct a much needed and overdue "audit" of ALL frameworks and paradigms currently in existence and being used by mankind! Even some paradigms once discarded might yet prove their worth, and still better ones may keep emerging. Don't the "harsh learnings" of this year, 2020, give us adequate cause for some course corrections??"
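On the model-comparison idea in the note above (e.g. which model best describes historical climate data), the standard statistical machinery already exists. Below is a minimal sketch using a synthetic temperature series and a simple information criterion; the data and the two candidate models are placeholders of mine, not real climate records or real climate models.

```python
import numpy as np

def aic(y, y_pred, n_params):
    """Akaike information criterion for a least-squares fit (lower is better)."""
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * n_params

years = np.arange(1920, 2021)
rng = np.random.default_rng(3)
# Placeholder "historical" series; a real study would load measured data
temps = (0.008 * (years - 1920)
         + 0.1 * np.sin(2 * np.pi * (years - 1920) / 60)
         + rng.normal(0, 0.1, len(years)))

# Candidate model A: linear trend.  Candidate model B: quadratic trend.
fit_lin = np.polyval(np.polyfit(years, temps, 1), years)
fit_quad = np.polyval(np.polyfit(years, temps, 2), years)

print(f"AIC linear:    {aic(temps, fit_lin, 2):.1f}")
print(f"AIC quadratic: {aic(temps, fit_quad, 3):.1f}")
```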

**I am therefore now proposing a new term for using AI itself to determine AI use cases:

"AI raised to the power of AI" or "AI Squared"

Does this make sense - does it resonate? I'm keen to get some feedback. Thanks!

***Also wish to share this important announcement of a FREE webinar on "THRIVE II" coming up this Saturday, 17th October. I truly believe it can add a LOT of value to the kind of topics we discuss on this forum. Please do attend if you like the concept of THRIVE :-

Regards

Folks, I sincerely hope at least some of you are watching the THRIVE Webinar - I certainly am :))

You will find that some of the themes in THRIVE resonate deeply with the topics we discuss on this forum.

Regards

For the best explanation of gravity I have seen to date, see Ionel Dinu's paper:

@Soretna , I like the way Ionel Dinu has dissected Newton's framework, especially the surprising confusion scientists have over a basic concept like "density".

Regards

@Soretna , I found the following 3 articles that provide a very effective rebuttal to the very existence of these gravity monsters, the so-called "Black Holes", in our Universe. Plasma z-pinch structures in space are repeatedly being MISINTERPRETED (!!) as Black Holes :-

https://www.thunderbolts.info/wp/2019/02/06/do-black-holes-matter/

Do Black Holes Matter ?

NGC 4889 (center) is thought to be home to a supermassive black hole 21 billion times the mass of the Sun, with an event horizon 130 billion kilometers in diameter. The bright cross-shape is a foreground star in the Milky Way. Credit: NASA/ESA.

Feb 6, 2019

There is supposed to be a supermassive black hole in the core of the Milky Way that is tearing stars apart.

Black holes cannot be directly seen, but astrophysicists continue to maintain that they exist because of their putative effects. They assume that matter can accelerate and compress until it is “spaghettified”, or stretched, whereupon it is torn apart and reconfigured by intense gravity fields.

Almost all (more than 95%) of galaxies are thought by astronomers to be home to one or more black holes. Since matter spins around a black hole at extreme velocities, consensus opinions state that it heats up from friction, generating X-rays and ultraviolet light. It is those emissions that are interpreted as indirect evidence for black holes.

Previous Pictures of the Day take issue with that model. The terminology, itself, is highly speculative and ambiguous. To say that X-rays and ultraviolet light are created in gravity fields is to betray an ignorance. Experiments in the laboratory create those energies by accelerating charged particles in an electric field.

There is no experiment that can provide evidence for matter collapsed to “near infinite density”. Rather, Bennet pinches (z-pinches) in plasma-state material form plasmoids. When the electric flux inside double layers within galactic circuits gets too high, there is a sudden “short circuit” that draws energy from the surrounding space. That energy could be concentrated from hundreds of cubic light years and then discharged in a burst of cosmic lightning, generating X-rays or flares of ultraviolet light.

Instead of a black hole, the source of X-ray radiation in the Milky Way’s heart is a plasmoid acting as a charged particle accelerator: electrons spiral in its electromagnetic fields and give off X-rays. The diffuse currents then flow out of the poles and back toward the galaxy’s equatorial plane, spiraling back toward the core.

There is no evidence that matter can collapse to “near infinite density” under gravity’s influence. Black holes are phantoms that can never be observed, since their so-called “event horizons” are impenetrable, allowing no direct observations. No light can escape, so they are invisible at any wavelength. They are pure theory, and have no basis for existence in the natural world.

Black holes are the subject of many previous Picture of the Day articles. The short story, from an Electric Universe perspective, is that black holes are a misperception. The descriptive terminology used by researchers is problematic, relying on loose interpretations. Ambiguous lexical labels such as space/time, singularities, infinite density, and other non-quantifiable ideas, make what should be realistic investigations into a kind of irony.

The concept of infinity is all over the work on black holes: infinitely weak gravity is compared to infinitely dense black holes. Such ideas mask the fact that no scientist understands gravity, or how mass is expressed by matter, or how matter expresses gravity. They especially ignore the Electric Universe.

Since stars are plasma phenomena, they are governed by electricity and not by gravity, alone. Since stars are loads in an externally powered circuit, a drop in circuit power means a drop in output, so a star will disappear—it enters a dark mode state. Variations in electric power mean variations in how a star behaves.

Astronomers believe gravity is their only tool, and, in their world, no force can stop the collapse of any mass great enough to form a black hole. However, gravity’s effect is so small that it is effectively non-existent when compared to the electric force. It is charge separation that holds stars together, preventing their collapse. Even thermonuclear fires are not needed for a star to “live”. The standard model of stars fails because gravity is the wrong tool.

Stephen Smith

https://www.thunderbolts.info/wp/2019/02/06/cosmic-magnetic-fields-the-ultimate-challenge-to-gravity-centric-cosmology-space-news/

Cosmic Magnetic Fields – The Ultimate Challenge to Gravity-Centric Cosmology | Space News

It has been one of the greatest surprises of the Space Age – powerful magnetic fields pervade the cosmos. Mainstream astronomers and astrophysicists do indeed acknowledge pervasive cosmic magnetism, but they did NOT predict it, and the realization has come begrudgingly. In this episode, we explore why the powerful magnetic fields associated with countless celestial objects are the clearest indication that we live in an Electric Universe.

Plasma z-pinch

A plasma z-pinch, also known as a zeta pinch (related to the Bennett pinch and the theta-pinch), is a very important plasma/electromagnetic mechanism for the Electric Universe theory, and also for Plasma Cosmology and standard science.

The basic z-pinch idea is that magnetic fields are generated in plasma by electric currents; these electromagnetic effects tighten, concentrate or pinch the plasma closer together.

Plasma z-pinches are of special interest to the EU theory, especially those that arise naturally when pairs of Birkeland currents interact with each other, perhaps forming nebulae such as the Southern Crab Nebula and the Butterfly Nebula, and stars.

It is confirmed that the movement of electric charges in plasma forms electromagnetic fields that constrict the current. As previous Picture of the Day articles point out, the constricted channel is known as a “Bennett pinch,” or “z-pinch.” The pinched electric filaments remain coherent over long distances, forming helical structures that can transmit power through space. That phenomenon is what scientists refer to as flux ropes. They also create electromagnetic structures called “plasmoids”. The glowing blobs observed in Encke’s tail are plasmoids.
Solar Plasmoids | thunderbolts TPOD

The Z-pinch is an application of the Lorentz force, in which a current-carrying conductor in a magnetic field experiences a force. One example of the Lorentz force is that, if two parallel wires are carrying current in the same direction, the wires will be pulled toward each other. In a Z-pinch machine the wires are replaced by a plasma, which can be thought of as many current-carrying wires. When a current is run through the plasma, the particles in plasma are pulled toward each other by the Lorentz force, thus the plasma contracts. The contraction is counteracted by the increasing gas pressure of the plasma.

As the plasma is electrically conductive, a magnetic field nearby will induce a current in it.
Z-pinch | wikipedia
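As a quick numerical check on the Lorentz-force picture quoted above, the attraction between two parallel currents follows the textbook Ampère force law. This is general physics, not specific to any of the sources quoted here, and the example currents are arbitrary.

```python
import math

MU_0 = 4 * math.pi * 1e-7        # permeability of free space (T*m/A)

def force_per_length(i1_amps, i2_amps, separation_m):
    """Attractive force per metre between two parallel currents flowing in
    the same direction: F/L = mu_0 * I1 * I2 / (2 * pi * d)."""
    return MU_0 * i1_amps * i2_amps / (2 * math.pi * separation_m)

# Example: two 1 A currents, 1 m apart, give 2e-7 N per metre (the old SI
# definition of the ampere); the force grows with the product of the currents.
print(f"{force_per_length(1.0, 1.0, 1.0):.1e} N/m")
```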

Astronomers see in this image “thick and turbulent clouds of gas and dust” that are “being sculpted into pillars by radiation and winds from hot, massive stars.” The language is misleading and inappropriate. The pillars are not turbulent; they have the characteristic tornadic column form of parallel z-pinch plasma discharge filaments. Z-pinches are the most efficient scavengers of matter in space, having an attractive force that falls linearly with distance from the axis. (Gravity falls off with the square of the distance.) Gravity and turbulence give no explanation for the surprising tornadic forms.

The notion of “triggered collapse” is merely hand waving. The inset image shows the telltale polar jet aligned with the z-pinch column. The glowing “ionization front” is not principally a photo-ionization or collisional effect but the glow of a plasma double-layer, energized by electric current. The nearby Herbig-Haro object, HH399, exhibits the typical thin polar corkscrew jet seen in more detail in the Herbig-Haro 49/50 below.

The heated, glowing plasma in these jets can extend for trillions of miles. They do not explosively dissipate in the vacuum of space because of the electromagnetic “pinch effect” of the electric current flowing along the jet. The spiral shape is that of Birkeland current filaments, which are the universal power transmission lines.
Assembling the Solar System | holoscience

Alfvén proposed the electrical circuit diagram for a star. It is in the form of a simple Faraday motor, which explains why the Sun’s equatorial plasma is driven fastest. It also explains the presence of the circumstellar disk, formed and held there by electromagnetic forces and not by weak gravity. And the problem of transfer of rotational energy does not arise because the entire system is held by powerful electromagnetic forces and driven like an electric motor. (The same explanation, of course, applies on a much grander scale to the anomalous rotation of the disk of spiral galaxies). When the star-forming z-pinch subsides, gravity is not able to retain the disk for long and current flowing in the disk (the stellar wind) sweeps the space clear.
Assembling the Solar System | holoscience

When electric current passes axially along a cylindrical conductor, a magnetic field is created that surrounds the conductor and tends to crush the cylinder. This effect is called the magnetic pinch and is commonly seen in the laboratory. If the conductor is a multi-layered collection of concentric cylinders, this crushing effect can produce a discharge between two or more layers of the structure.

Many instances have recently been reported of stars exhibiting surrounding rings. The bright star Fomalhaut has now been discovered to have one. Another classic double hour-glass structure is visible in images of the object called the Southern Crab Nebula. It is a well known property of plasma that it can operate in two visible modes (arc and glow) and one invisible mode (dark mode).

So in some objects all of the structure described above presents itself. In others, parts of the plasma composition are in dark mode and so are not visible. For example, in the object shown in Fig. 9, the outer, larger extent of the plasma is very diffuse – the electric current density being insufficient to illuminate it as well as the inner regions shown in the lower right of that figure.

There are literally dozens of objects that exhibit this morphology such as planetary nebula MyCn 18, which contains a ring around its central object.
Magnetic Pinch – An Electric Universe View of Stellar and Galactic Formation | Donald E Scott (link to PDF)

That parallel currents attract each other was known already at the times of Ampère. It is easy to understand that in a plasma, currents should have a tendency to collect to filaments. In 1934, it was explicitly stated by Bennett that this should lead to the formation of a pinch. The problem which led him to the discovery was that the magnetic storm producing medium (solar wind with present terminology) was not flowing out uniformly from the Sun. Hence, it was a problem in cosmic physics which led to the introduction of the pinch effect … However, to most astrophysicists it is an unknown phenomenon. Indeed, important fields of research, e.g., the treatment of the state in interstellar regions, including the formation of stars, are still based on a neglect of Bennett’s discovery more than half a century ago … present-day students in astrophysics hear nothing about it.
Hannes Alfvén quote – Stars in an Electric Universe | (link to PDF)

The extremely large output of power and energy was accomplished by converting the accelerator’s electrical output into a dense, ionized gas (plasma) called a z-pinch, which efficiently produces X-rays. A z-pinch is so named because it creates a magnetic field that, as it contracts around ionized gas, pinches it vertically along (to a mathematician) the z-axis.
The z-pinch | Sandia National Laboratories

Birkeland currents align themselves with the ambient magnetic field direction. The hourglass z-pinch shape has been confirmed in the magnetic field of a star-forming region. And in laboratory z-pinch experiments, the plasma tends to form a number of “beads” along the axis, which “scatter like buckshot” once the discharge subsides.
Assembling the Solar System | holoscience

Z pinches and black holes

Closer to the black hole, heat generated by molecular collisions tears atoms apart and the disk glows in extreme ultraviolet and X-rays. This is what is referred to as a black hole’s “corona”.

No direct evidence exists for matter compressed to nearly infinite density. Instead, it is Z-pinches in plasma filaments forming plasmoids that energize stars and galaxies. When charge density is too high, double layers form, catastrophically releasing their excess energy in bursts of X-rays or flares of ultraviolet light.

Electric charge flowing in plasma generates magnetic fields that constrict the current channel. Pinched electric filaments remain coherent over long distances, spiraling around each other and forming helical structures that can transmit power through space. Those filaments are the jets seen in galaxies and stars.

No gigantic masses compressed into tiny volumes are necessary, and those flares and jets are easily generated with the proper experimental equipment. There are other factors that should be considered when analyzing observational data before resorting to super-dense objects and other fantastical ideas. It could be that there are lightning flashes taking place in the center of Markarian 335.
Flash in the Pan | Thunderbolts TPOD

Regards

Folks, here is an excellent Q&A with David LaPoint which you will like - I have especially highlighted his comments on Black Holes :-

"
Black Holes?

David LaPoint

On black holes and planetary orbits.

I believe that what are called black holes are actually the effects of the magnetic fields confining matter. The orbits of the planets are also due to the magnetic fields, as you see with the outer orbitals on the acrylic sheet in my video. Remember this experiment was conducted within the magnetic field of the Earth, so I have to contend with the effects of that. If we could do that experiment in deep space, far away from other magnetic fields, I believe we could increase that orbital distance quite a bit. It can be very difficult to totally let go of the gravity concept when it comes to orbitals in space. It was difficult for me to let gravity go. But it actually makes a lot more sense that the orbits of planets are due to magnetic confinement once you see that it is possible and that it places objects in clearly defined stable orbits."

Regards

List members, "The Emperor has no clothes" :)) Just see the video about this HILARIOUS faux pas , involving none other than CERN and it's much HYPED Large Hadron Collider (LHC) machine - enjoy :-

Regards

Folks, it's high time we did the math and calculated the true MAGNITUDE of the opportunity loss for mankind, when the high priests of the Einstein-worshipping cult at CERN are on the verge of securing over 30 BILLION DOLLARS (yes, you read that right!) of funding to build the next-generation particle collider.

You know, when I was a kid, I used to love reading the comic book series "Hagar the Horrible", a humorous story about an ancient Viking warrior and his family. The funniest part of that comic was a game their kids used to play called "ZONK MY KONK" - basically head banging, where 2 kids would come charging in from opposite sides, wearing their metallic helmets and SMASHING their heads together, with one kid getting knocked out! By the way, no offence meant to anyone with a Scandinavian background - Vikings were the greatest adventurers and explorers in recorded history.

**I think the idea of building yet another particle collider after the ABJECT FAILURE of the Large Hadron Collider (LHC) is no more intelligent than that age-old game of Zonk my Konk!! :))))

To put things in perspective, 0.01% of 30 billion is 3 million. You know what - let's play a little mind game, right here, right now :-

Let's be generous and say: "OK, go ahead, you governments of major nations, and grant all but 3 million of the 30 billion to CERN; give the remaining 0.01%, i.e. 1/10,000th or 3 million dollars of that amount, to our group (yes!!) for our line of research. Then set the timer for the next 10 years and let the race begin, the results being declared in 2030. In this modern contest between David and Goliath (rather, King Kong or Godzilla), David will once again prevail!" :))

Oh, in that hypothetical situation of our research group being granted 3 million dollars, how would we actually use that money? By the way, had it not been for the SENSITIVITY of the topics we research, we could have collectively raised more than 3 million dollars just by crowdfunding/crowdsourcing. Anyhow, here's a high-level strawman budgeting estimate (please help me refine it) :-

  1. We would definitely need drones - at least a couple of good ones - for our aerial surveys of "potential sites"
  2. Photogrammetry software tools to analyse the visual data from the drones
  3. Possibly, VRICON tools to develop composite 3D maps of "interesting sites"
  4. Ground-penetrating radar (GPR)
  5. LIDAR equipment
  6. A small sturdy boat that could sail in Arctic waters
  7. A small lab setup where we could do some "tinkering around" with our concepts of Electric Universe theory
  8. A decent telescope
  9. A subscription to GlobalXplorer and MAXAR for satellite images
  10. While there is a lot of "FREE" open-source software available, a good AI tool like GPT-3 would really help

***I also have a name in mind for this research group, which I had proposed before :-

Hollow Earth Research Organisation, or H.E.R.O.

Doesn't H.E.R.O. sound a lot better than C.E.R.N.?!

Well, that's all I've got for now, but guess what - all of the above can fit inside 3 million dollars! Is anybody listening?? Ha Ha!! Aha :))

Regards

This brings me back to the critical nature of Gregory Hodowanec's cheap gravimeter research. There is so much that can be done with this to debunk the garbage flowing through the mainstream presently.

@Soretna , couldn't agree more...also, with the recent announcement of a Nobel prize for the theory of formation of "so-called" Black Holes, the fundamental sciences have been pushed into a black hole of ignorance, from which even intelligence cannot escape :))

This "Academic Event" has real world consequences , because over 30 BILLION DOLLARS of taxpayer money (from various countries) are soon going to change hands , due to this QUID PRO QUO with C.E.R.N. That's more than the GDP of many nations , it's a HELL of a LOT of good money that's going to get flushed down the toilet - right in front of our eyes , folks !

So we all need to be greatly CON-C.E.R.N.ed (!!) - pun intended - about this SHAM of a Nobel prize award. Ha ha!

Regards

Folks, I am hoping we, the people of this group, are able to keep our eyes on the ball with regard to this topic...with all the other major events happening in the world though, it can sometimes be challenging!

However, please mark my words - this debate MUST be taken to its logical conclusion - the future progress of humanity depends on the resolution of this most important question in science.

I don't want to sound cynical, but this question has a lot more long-term significance for mankind than those "major events" :))

Regards