
Monday, 20 January 2014

Climate change: The case of the missing heat


The biggest mystery in climate science today may have begun, unbeknownst to anybody at the time, with a subtle weakening of the tropical trade winds blowing across the Pacific Ocean in late 1997. These winds normally push sun-baked water towards Indonesia. When they slackened, the warm water sloshed back towards South America, resulting in a spectacular example of a phenomenon known as El Niño. Average global temperatures hit a record high in 1998 — and then the warming stalled.



 



For several years, scientists wrote off the stall as noise in the climate system: the natural variations in the atmosphere, oceans and biosphere that drive warm or cool spells around the globe. But the pause has persisted, sparking a minor crisis of confidence in the field. Although there have been jumps and dips, average atmospheric temperatures have risen little since 1998, in seeming defiance of projections of climate models and the ever-increasing emissions of greenhouse gases. Climate sceptics have seized on the temperature trends as evidence that global warming has ground to a halt. Climate scientists, meanwhile, know that heat must still be building up somewhere in the climate system, but they have struggled to explain where it is going, if not into the atmosphere. Some have begun to wonder whether there is something amiss in their models.


Now, as the global-warming hiatus enters its sixteenth year, scientists are at last making headway in the case of the missing heat. Some have pointed to the Sun, volcanoes and even pollution from China as potential culprits, but recent studies suggest that the oceans are key to explaining the anomaly. The latest suspect is the El Niño of 1997–98, which pumped prodigious quantities of heat out of the oceans and into the atmosphere — perhaps enough to tip the equatorial Pacific into a prolonged cold state that has suppressed global temperatures ever since.


“The 1997 to ’98 El Niño event was a trigger for the changes in the Pacific, and I think that’s very probably the beginning of the hiatus,” says Kevin Trenberth, a climate scientist at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. According to this theory, the tropical Pacific should snap out of its prolonged cold spell in the coming years. “Eventually,” Trenberth says, “it will switch back in the other direction.”


Stark contrast


On a chart of global atmospheric temperatures, the hiatus stands in stark contrast to the rapid warming of the two decades that preceded it. Simulations conducted in advance of the 2013–14 assessment from the Intergovernmental Panel on Climate Change (IPCC) suggest that the warming should have continued at an average rate of 0.21 °C per decade from 1998 to 2012. Instead, the observed warming during that period was just 0.04 °C per decade, as measured by the UK Met Office in Exeter and the Climatic Research Unit at the University of East Anglia in Norwich, UK.


The simplest explanation for both the hiatus and the discrepancy in the models is natural variability. Much like the swings between warm and cold in day-to-day weather, chaotic climate fluctuations can knock global temperatures up or down from year to year and decade to decade. Records of past climate show some long-lasting global heatwaves and cold snaps, and climate models suggest that either of these can occur as the world warms under the influence of greenhouse gases.






But none of the climate simulations carried out for the IPCC produced this particular hiatus at this particular time. That has led sceptics — and some scientists — to the controversial conclusion that the models might be overestimating the effect of greenhouse gases, and that future warming might not be as strong as is feared. Others say that this conclusion goes against the long-term temperature trends, as well as palaeoclimate data that are used to extend the temperature record far into the past. And many researchers caution against evaluating models on the basis of a relatively short-term blip in the climate. “If you are interested in global climate change, your main focus ought to be on timescales of 50 to 100 years,” says Susan Solomon, a climate scientist at the Massachusetts Institute of Technology in Cambridge.


But even those scientists who remain confident in the underlying models acknowledge that there is increasing pressure to work out just what is happening today. “A few years ago you saw the hiatus, but it could be dismissed because it was well within the noise,” says Gabriel Vecchi, a climate scientist at the US National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. “Now it’s something to explain.”


Researchers have followed various leads in recent years, focusing mainly on a trio of factors: the Sun [1], atmospheric aerosol particles [2] and the oceans [3]. The output of energy from the Sun tends to wax and wane on an 11-year cycle, but the Sun entered a prolonged lull around the turn of the millennium. The natural 11-year cycle is currently approaching its peak, but thus far it has been the weakest solar maximum in a century. This could help to explain both the hiatus and the discrepancy in the model simulations, which include a higher solar output than Earth has experienced since 2000.


An unexpected increase in the number of stratospheric aerosol particles could be another factor keeping Earth cooler than predicted. These particles reflect sunlight back into space, and scientists suspect that small volcanoes — and perhaps even industrialization in China — could have pumped extra aerosols into the stratosphere during the past 16 years, depressing global temperatures.


Some have argued that these two factors could be primary drivers of the hiatus, but studies published in the past few years suggest that their effects are likely to be relatively small [4,5]. Trenberth, for example, analysed their impacts on the basis of satellite measurements of energy entering and exiting the planet, and estimated that aerosols and solar activity account for just 20% of the hiatus. That leaves the bulk of the hiatus to the oceans, which serve as giant sponges for heat. And here, the spotlight falls on the equatorial Pacific.


Blowing hot and cold





Just before the hiatus took hold, that region had turned unusually warm during the El Niño of 1997–98, which fuelled extreme weather across the planet, from floods in Chile and California to droughts and wildfires in Mexico and Indonesia. But it ended just as quickly as it had begun, and by late 1998 cold waters — a mark of El Niño’s sister effect, La Niña — had returned to the eastern equatorial Pacific with a vengeance. More importantly, the entire eastern Pacific flipped into a cool state that has continued more or less to this day.


This variation in ocean temperature, known as the Pacific Decadal Oscillation (PDO), may be a crucial piece of the hiatus puzzle. The cycle reverses every 15–30 years, and in its positive phase, the oscillation favours El Niño, which tends to warm the atmosphere (see ‘The fickle ocean’). After a couple of decades of releasing heat from the eastern and central Pacific, the region cools and enters the negative phase of the PDO. This state tends towards La Niña, which brings cool waters up from the depths along the Equator and tends to cool the planet. Researchers identified the PDO pattern in 1997, but have only recently begun to understand how it fits in with broader ocean-circulation patterns and how it may help to explain the hiatus.


One important finding came in 2011, when a team of researchers at NCAR led by Gerald Meehl reported that inserting a PDO pattern into global climate models causes decade-scale breaks in global warming [3]. Ocean-temperature data from the recent hiatus reveal why: in a subsequent study, the NCAR researchers showed that more heat moved into the deep ocean after 1998, which helped to prevent the atmosphere from warming [6]. In a third paper, the group used computer models to document the flip side of the process: when the PDO switches to its positive phase, it heats up the surface ocean and atmosphere, helping to drive decades of rapid warming [7].


A key breakthrough came last year from Shang-Ping Xie and Yu Kosaka at the Scripps Institution of Oceanography in La Jolla, California. The duo took a different tack, by programming a model with actual sea surface temperatures from recent decades in the eastern equatorial Pacific, and then seeing what happened to the rest of the globe [8]. Their model not only recreated the hiatus in global temperatures, but also reproduced some of the seasonal and regional climate trends that have marked the hiatus, including warming in many areas and cooler northern winters.


“It was actually a revelation for me when I saw that paper,” says John Fyfe, a climate modeller at the Canadian Centre for Climate Modelling and Analysis in Victoria. But it did not, he adds, explain everything. “What it skirted was the question of what is driving the tropical cooling.”






That was investigated by Trenberth and John Fasullo, also at NCAR, who brought in winds and ocean data to explain how the pattern emerges [4]. Their study documents how tropical trade winds associated with La Niña conditions help to drive warm water westward and, ultimately, deep into the ocean, while promoting the upwelling of cool waters along the eastern equatorial region. In extreme cases, such as the La Niña of 1998, this may be able to push the ocean into a cool phase of the PDO. An analysis of historical data buttressed these conclusions, showing that the cool phase of the PDO coincided with a few decades of cooler temperatures after the Second World War (see ‘The Pacific’s global reach’), and that the warm phase lined up with the sharp spike seen in global temperatures between 1976 and 1998 [4].


“I believe the evidence is pretty clear,” says Mark Cane, a climatologist at Columbia University in New York. “It’s not about aerosols or stratospheric water vapour; it’s about having had a decade of cooler temperatures in the eastern equatorial Pacific.”


Heated debate


Cane was the first to predict the current cooling in the Pacific, although the implications weren’t clear at the time. In 2004, he and his colleagues found that a simple regional climate model predicted a warm shift in the Pacific that began around 1976, when global temperatures began to rise sharply [9]. Almost as an afterthought, they concluded their paper with a simple forecast: “For what it is worth the model predicts that the 1998 El Niño ended the post-1976 tropical Pacific warm period.”


It is an eerily accurate result, but the work remains hotly contested, in part because it is based on a partial climate model that focuses on the equatorial Pacific alone. Cane further maintains that the trend over the past century has been towards warmer temperatures in the western Pacific relative to those in the east. That opens the door, he says, to the possibility that warming from greenhouse gases is driving La Niña-like conditions and could continue to do so in the future, helping to suppress global warming. “If all of that is true, it’s a negative feedback, and if we don’t capture it in our models they will overstate the warming,” he says.


There are two potential holes in his assessment. First, the historical ocean-temperature data are notoriously imprecise, leading many researchers to dispute Cane’s assertion that the equatorial Pacific shifted towards a more La Niña-like state during the past century [10]. Second, many researchers have found the opposite pattern in simulations with full climate models, which factor in the suite of atmospheric and oceanic interactions beyond the equatorial Pacific. These tend to reveal a trend towards more El Niño-like conditions as a result of global warming. The difference seems to lie, in part, in how warming influences evaporation in areas of the Pacific, according to Trenberth. He says the models suggest that global warming has a greater impact on temperatures in the relatively cool east, because the increase in evaporation adds water vapour to the atmosphere there and enhances atmospheric warming; this effect is weaker in the warmer western Pacific, where the air is already saturated with moisture.


Scientists may get to test their theories soon enough. At present, strong tropical trade winds are pushing ever more warm water westward towards Indonesia, fuelling storms such as November’s Typhoon Haiyan, and nudging up sea levels in the western Pacific; they are now roughly 20 centimetres higher than those in the eastern Pacific. Sooner or later, the trend will inevitably reverse. “You can’t keep piling up warm water in the western Pacific,” Trenberth says. “At some point, the water will get so high that it just sloshes back.” And when that happens, if scientists are on the right track, the missing heat will reappear and temperatures will spike once again.




Astrophysics: Black hole found orbiting a fast rotator

Classification spectrum of the Be star MWC 656.


Stars of spectral type ‘Be’ are often found with neutron stars or other evolved analogues, but a black-hole companion has never been spotted before. Optical emission from a black hole’s surroundings has given it away.


Stellar-mass black holes have all been discovered through X-ray emission, which arises from the accretion of gas from their binary companions (this gas is either stripped from low-mass stars or supplied as winds from massive ones). Binary evolution models also predict the existence of black holes accreting from the equatorial envelope of rapidly spinning Be-type stars [1,2,3] (stars of the Be type are hot blue irregular variables showing characteristic spectral emission lines of hydrogen). Of the approximately 80 Be X-ray binaries known in the Galaxy, however, only pulsating neutron stars have been found as companions [2,3,4]. A black hole was formally allowed as a solution for the companion to the Be star MWC 656 (ref. 5; also known as HD 215227), although that conclusion was based on a single radial velocity curve of the Be star, a mistaken spectral classification [6] and rough estimates of the inclination angle. Here we report observations of an accretion disk line mirroring the orbit of MWC 656. This, together with an improved radial velocity curve of the Be star through fitting sharp Fe II profiles from the equatorial disk, and a refined Be classification (to that of a B1.5–B2 III star), indicates that a black hole of 3.8 to 6.9 solar masses orbits MWC 656, the candidate counterpart of the γ-ray source AGL J2241+4454 (refs 5, 6). The black hole is X-ray quiescent and fed by a radiatively inefficient accretion flow giving a luminosity less than 1.6 × 10^-7 times the Eddington luminosity. This implies that Be binaries with black-hole companions are difficult to detect in conventional X-ray surveys.




A cosmic web filament revealed in Lyman-α emission around a luminous high-redshift quasar

Simulations of structure formation in the Universe predict that galaxies are embedded in a ‘cosmic web’ [1], where most baryons reside as rarefied and highly ionized gas [2]. This material has been studied for decades in absorption against background sources [3], but the sparseness of these inherently one-dimensional probes precludes direct constraints on the three-dimensional morphology of the underlying web. Here we report observations of a cosmic web filament in Lyman-α emission, discovered during a survey for cosmic gas fluorescently illuminated by bright quasars [4,5] at redshift z ≈ 2.3. With a linear projected size of approximately 460 physical kiloparsecs, the Lyman-α emission surrounding the radio-quiet quasar UM 287 extends well beyond the virial radius of any plausible associated dark-matter halo and therefore traces intergalactic gas. The estimated cold gas mass of the filament from the observed emission — about 10^(12.0 ± 0.5)/C^(1/2) solar masses, where C is the gas clumping factor — is more than ten times larger than what is typically found in cosmological simulations [5,6], suggesting that a population of intergalactic gas clumps with subkiloparsec sizes may be missing in current numerical models.




Speech means using both sides of our brain

In this file photo a patient is examined during a scan session at the University Hospital in Liege, Belgium.


We use both sides of our brain for speech, according to a new study that could rewrite therapies for those who have lost the ability to speak after a stroke.


“Our findings upend what has been universally accepted in the scientific community — that we use only one side of our brains for speech,” said Bijan Pesaran, an associate professor in New York University’s Center for Neural Science and the study’s senior author.


“In addition, now that we have a firmer understanding of how speech is generated, our work toward finding remedies for speech afflictions is much better informed,” Pesaran said.


The scientific community has largely believed that both speech and language are lateralised — that is, we use only one side of our brains for speech, which involves listening and speaking, and for language, which involves constructing and understanding sentences.


However, the conclusions pertaining to speech generally stem from studies that rely on indirect measurements of brain activity, raising questions about characterising speech as lateralised.


To address this matter, the researchers directly examined the connection between speech and the neurological process.


Specifically, the study relied on data collected at NYU ECoG, a centre where brain activity is recorded directly from patients implanted with specialised electrodes placed directly inside and on the surface of the brain while the patients are performing sensory and cognitive tasks.


Here, the researchers examined brain functions of patients suffering from epilepsy by using methods that coincided with their medical treatment.


The researchers tested the parts of the brain that were used during speech. Here, the study’s subjects were asked to repeat two “non-words” — “kig” and “pob.”


Using non-words as a prompt to gauge neurological activity, the researchers were able to isolate speech from language.


An analysis of brain activity as patients engaged in speech tasks showed that both sides of the brain were used — that is, speech is, in fact, bi-lateral.


“Now that we have greater insights into the connection between the brain and speech, we can begin to develop new ways to aid those trying to regain the ability to speak after a stroke or injuries resulting in brain damage,” said Pesaran.


“With this greater understanding of the speech process, we can retool rehabilitation methods in ways that isolate speech recovery and that don’t involve language,” Pesaran said.


The study was published in the journal Nature.




Mechanical Engineering


Saturday, 18 January 2014

How to Keep Your Devices Juiced Up on the Road



You can almost date a hotel room’s last remodel by the number and location of jacks. In the old days, you’d be lucky to find a usable power outlet — one that wasn’t taken up with lamps or TV. Plug-in phone jacks were nonexistent.


We then went to bedside phone jacks and later to desktop phone jacks and on to Ethernet jacks for supposed broadband; eventually, we went over to WiFi, with no jacks. Next came flashy flat-screen, media-dripping TVs with no inputs — HDMI inputs came later.


Only very recently were we provided bedside lighting resplendent with power outlets.


We have now ended up, in the newest hotels in late 2013, with powered-USB jacks peppering in-room, now-redundant media centers — redundant already, because we no longer need hotel-provided media. Instead, we carry our own.


Scary stories abound about these USB power jacks — you don’t know what they are connected to, behind the walls, thus making them potentially unusable for anyone concerned with security. Smartphones and tablets, meanwhile, are getting too big to rest on those tiny bedside tables along with coffee cups, redundant landlines and complimentary chocolate.


We don’t necessarily want copious bedside power anymore — we want it over by large, flat, improvised charging-station surfaces, like a desk.


What to do? Well, the simple answer to this conundrum is to become self-sufficient. Bring your own media, screens, Internet connection, comms and power. I’ve spent a lot of time on the road. Here’s how I reckon you need to do it.



 


Step 1: Purchase a power strip dedicated to your travel kit.


Such strips don’t have to be beige. I bought a svelte-looking, 6-outlet slim-line black one that matches my kit for US$4.99 at Fry’s Electronics. It’s one of the best purchases I’ve ever made — I only need to find one spare in-room socket now.


Step 2: Assemble an e-kit.


This should be a dedicated container at your home or work space that contains cables and adapters that you need to grab when headed away from base. Moreover, you need to duplicate it, because wandering around the house looking for spares before a trip does not work. It’s time-consuming, and you will forget something.


 


Just try finding a unique power supply for a device in a foreign country — it could take you days of fruitless pavement pounding. I know; I’ve done it.


Tip: Cables that you need include HDMI for media out; an Ethernet cable, for the now-rare occasion that a hotel provides Internet that way; and — especially important — power. You need a USB cable plus a wall adapter for each device that uses one as well as dedicated tablet and laptop bricks — if they use them.


Warning: Basic phone-supplied USB chargers will not, on the whole, charge tablets. They don’t provide enough juice.


Step 3: Buy a foreign-power plug converter before you leave the U.S. if you’re headed overseas.


You only need one — matched to where you are going — because you’ve got your power strip. The best place to buy plug converters is in airports, air-side.


Tip: Perform a Google search for the plug style of the country you’re going to. Check a few sources. TripAdvisor can be helpful.


Almost all gadget-type electronics, as opposed to appliances, are dual-voltage, so you don’t need a transformer. Verify dual voltage by checking on the existing brick or adapter, not the device.


Step 4: Acquire some spare batteries and a power bank.


Buy Lithium Polymer rather than Lithium Ion whenever you can. Polymer technology is lighter. Fry’s sells a 13 Ah Tenergy power bank with USB sockets for $99.99.


Don’t forget some spares for headphones, if your headphones use them and are noise-canceling. Keep the batteries with the headphones for swift access on darkened flights.


Step 5: Prepare a dedicated auto kit.


Obtain a cigarette-lighter adapter with attached USB cable for in-vehicle smartphone charging.


A mini-inverter with cigarette lighter plug is ideal for in-vehicle laptop and tablet charging. I use the CyberPower 100W, which runs only $20 at Fry’s. It also has a USB jack.


Want to Ask a Tech Question?


Is there a piece of tech you’d like to know how to operate properly? Is there a gadget that’s got you confounded? Please send your tech questions to me, and I’ll try to answer as many as possible in this column.


And use the Talkback feature below to add your comments!




Suchitra Sen’s presence was felt everywhere: Sharmila Tagore

Sharmila Tagore expressed shock at the death of legendary Bengali actress Suchitra Sen. Speaking to IBN18’s Editor-in-chief Rajdeep Sardesai, Tagore said that Suchitra carried an aura of stardom and that her presence was felt everywhere. Sen, 82, who had been under treatment at the Belle Vue Clinic in Kolkata for the past 26 days following serious respiratory problems, breathed her last at 8.25 am today, attending doctor Subir Mondal said.









Pentagon Wary of New Chinese Missile Vehicle

Last week, China’s military took its new “ultra-high speed missile vehicle” — or “hypersonic glide vehicle,” if you prefer — for its first test drive, raising eyebrows among U.S. defense officials.


The hypersonic aircraft, capable of maneuvering at a mindboggling 10 times the speed of sound — that’s more than 7,500 miles per hour — is designed to deliver warheads through U.S. missile defenses, according to the Pentagon. Call it a great leap forward in China’s military capacity.


The Pentagon has dubbed the aircraft “WU-14”; Wu, incidentally, is China’s ninth-most common surname.


[Source: FreeBeacon.com via The Age]



 


Canada Cries Foul Over Google Privacy


Perhaps taking a cue from the slew of European countries lashing out against Google’s privacy policies — including France, which hit Google with a fine last week — Canada’s federal privacy watchdog announced that Google violated national privacy law.


While the EU complaints typically center on Google’s melding of various privacy policies into a single cross-platform policy, Canada’s beef is with Google’s targeted online advertising.


The Office of the Privacy Commissioner of Canada has been investigating Google’s ad practices for a year, prompted by a man who complained that he was being targeted based on a medical condition: He had searched for a device to assist with sleep apnea, and subsequently saw ads for such products popping up.


Such targeted ads are hardly shocking — this has been Google’s modus operandi for years — but this particular instance was deemed inappropriate because it pertained to “sensitive information.”


[Source: The Globe and Mail]


Syrian Electronic Army Still Doing Syrian Electronic Army Things


The pro-Bashar al-Assad Syrian Electronic Army accessed Microsoft employee email accounts.


Only a “small number” of accounts were compromised, Microsoft told The Verge — but enough to enable the SEA to post three internal emails that were plucked from Outlook Web accounts. The emails discuss the recent SEA hacking of a handful of Twitter accounts authored by Microsoft.


Phishing — sending messages with links that can implant malware onto a computer — was the SEA’s method, according to Microsoft, which added that no customer information was compromised.


However, the SEA noted that Microsoft’s password security was far from staunch: “A Microsoft employee wanted to make his password more stronger [sic], so he changed it from ‘Microsoft2’ to ‘Microsoft3’ #happened,” an SEA spokesperson tweeted.


[Source: The Verge]


HP to Launch Huge Phone in India


HP plans to launch a “voice tablet” with a 6-inch screen in India next month.


The device, which will run on Android, signals HP’s return to the smartphone market — to the extent that a device with a 6-inch screen can be called a “phone.” It can play high-definition video, and it is capable of taking HD photos with its front- or rear-facing cameras.


The phablet push is not unique to HP. LG’s most recent curved smartphone, the LG G Flex, also has a 6-inch display.


[Source: The Washington Post]


Huawei Says Security Concerns Are Bunk


Chinese telecom giant Huawei denied claims that its equipment is particularly susceptible to hacking.


The declaration came after German magazine Der Spiegel – among those with unfettered access to Edward Snowden’s document bounty — reported last month that the National Security Agency had installed “back doors” into Huawei equipment.


It is “groundless” to report that Huawei is any more vulnerable than other telecoms, a company spokesperson said.




Facebook Gets More Topical

Facebook may be feeling Twitter’s heat, judging from its latest attempt to emulate it with a new trending topics feature. Both teens and advertisers have demonstrated growing interest in Twitter in recent months, which surely must be unnerving for Facebook. Its new Trending functionality could run into problems with Facebook users, however, as it’s one more uninvited guest on the News Feed page.




Facebook on Thursday announced Trending, a new product designed to surface relevant and timely conversations occurring on the network.


Trending, displayed to the right of the user’s News Feed, will feature a list of topics that have recently spiked in popularity, personalized according to subjects of interest to the user. It will include topics that are trending across Facebook in general, as well.


 


Facebook trending topics


Each topic will be accompanied by a headline explaining why it is trending. Users can click on the headline to see posts from friends or Pages dedicated to the subject.


Trending is rolling out in select countries, including the U.S.


Initially, it will be viewable only on the full website, but Facebook plans to test the new feature for mobile as well.


Similar to Twitter


Trending topics is one of Twitter’s most popular features. Facebook presumably hopes its own offering will translate into more users — or at least stickier users. Or, at the very least, that it will keep users from defecting to Twitter.


Among things likely concerning Facebook is last fall’s survey by Piper Jaffray, which found that Facebook’s popularity with teens was slipping. Just 23 percent named it their most important social network, down from 33 percent six months earlier.


Twenty-six percent of teens surveyed said that Twitter had become their most important social network.


Perhaps more worrisome, Twitter has grown in popularity among advertisers, according to a survey by Ad Age and RBC Capital Markets published at the end of last year. It found that approximately six in 10 marketers advertising on Twitter planned to increase their Twitter ad budgets significantly over the next year.


A Smart Move


Facebook’s addition of Trending is a smart one, especially because it has the potential to increase national conversations, said Kevin Green, EVP of global digital strategy and partnerships at Racepoint Global.


“Over the past few months, Facebook has been trying to bring the human element back into the platform and decrease the amount of low-value content that, in essence, was the downfall of MySpace,” he told TechNewsWorld.


“Trending topics removes the requirement of the user to Like pages that share information that matters to them. It’s as much about being informed as it is engaging in a conversation with trusted peers,” he said.


In fact, Facebook may one-up Twitter with its approach to Trending, suggested CA Creative digital director Eric Chang.


“Facebook is much more visual than Twitter, which can easily translate into users filtering through and consuming content faster and with more ease,” he told TechNewsWorld. “Secondly, Facebook’s content tends to maintain longer lifecycles, which leads to higher chances of social sharing.”


It Could Backfire


That said, Trending does have the potential to offend some of Facebook’s multitude of users.


Facebook has had similar features in the past that didn’t work out very well, noted Sang Nam, associate professor of communications at Quinnipiac University.


For some, Trending’s placement in the News Feed could seem “aggressive,” he told TechNewsWorld.


“People are already annoyed by random advertisements by sponsors and will be more annoyed about trending topics,” Nam said.


Also, there is plenty of anecdotal evidence that Facebook users are tired of being exposed to unwanted posts, even by friends or people in their social circle, he continued.


“Facebook might want to compete against Twitter, but Facebook and Twitter are totally different social media networking tools,” Nam insisted. “I just don’t think it’s wise for Facebook to move in this direction.”





Monday, 13 January 2014

Electrical engineering interview question and answer set 2

Why star delta starter is preferred with induction motor?


Star delta starter is preferred with induction motor due to the following reasons:

• The starting current is reduced to about one-third of the direct-on-line (DOL) starting current, which limits the voltage dip on the supply and reduces losses.

• During starting, the windings are connected in star, so each phase receives only 1/√3 of the line voltage; the line current therefore falls to one-third of its delta value, and the risk of burning the motor winding is reduced.

• The starting torque is likewise reduced to one-third of its delta value, which limits mechanical and thermal stress on the motor winding during starting. (A quick numerical check of the current ratio follows below.)
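A minimal Python sketch of the star/delta current ratio. The 400 V line voltage and 10 Ω locked-rotor impedance per phase are hypothetical example values chosen only for illustration:

```python
import math

V_line = 400.0   # line voltage in volts (hypothetical example value)
Z_phase = 10.0   # locked-rotor impedance per phase in ohms (hypothetical)

# Delta start: each winding sees the full line voltage.
I_phase_delta = V_line / Z_phase                 # current in each winding
I_line_delta = math.sqrt(3) * I_phase_delta      # line current drawn from the supply

# Star start: each winding sees only V_line / sqrt(3),
# and the line current equals the winding current.
I_line_star = (V_line / math.sqrt(3)) / Z_phase

print(I_line_delta / I_line_star)  # -> 3.0: star start draws one-third the current
```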


State the difference between generator and alternator


A generator and an alternator are two devices which convert mechanical energy into electrical energy. Both work on the same principle of electromagnetic induction; the only difference is in their construction. A generator has a stationary magnetic field and rotating armature conductors, with a commutator and brushes riding against each other, so it converts the induced emf into DC current for the external load. An alternator built for high voltages has a stationary armature and a rotating magnetic field; for low-voltage output, a rotating armature and a stationary magnetic field are used.


Why AC systems are preferred over DC systems?


Due to following reasons, AC systems are preferred over DC systems:

a. It is easy to maintain and change the voltage of AC electricity for transmission and distribution.

b. The plant cost for AC transmission (circuit breakers, transformers etc.) is much lower than for an equivalent DC transmission system.

c. Power stations generate AC, so it is better to use it directly than to convert it to DC.

d. When a large fault occurs in a network, it is easier to interrupt in an AC system, as the sine wave current will naturally tend to zero at some point making the current easier to interrupt.


How can you relate power engineering with electrical engineering?


Power engineering is a subdivision of electrical engineering. It deals with the generation, transmission and distribution of energy in electrical form. The design of all power equipment also comes under power engineering. Power engineers may work on the design and maintenance of the power grid (on-grid systems), or on off-grid systems that are not connected to the grid.


What are the various kind of cables used for transmission?


Cables, which are used for transmitting power, can be categorized in three forms:

• Low-tension cables, which can transmit voltage up to 1000 volts.

• High-tension cables, which can transmit voltage up to 23000 volts.

• Super-tension cables, which can transmit voltage from 66 kV to 132 kV.


Why is back emf used for a DC motor? Highlight its significance.


When the armature of a DC motor rotates, its conductors cut the magnetic flux between the poles of the magnet and an emf is induced in them. This induced emf opposes the current flowing through the conductor and is therefore called back emf. Its value depends upon the speed of rotation of the armature conductors; at starting, the value of back emf is zero.


What is slip in an induction motor?


Slip can be defined as the difference between the flux speed (Ns) and the rotor speed (N). The speed of the rotor of an induction motor is always less than its synchronous speed. Slip is usually expressed as a percentage of synchronous speed and represented by the symbol ‘S’: S = (Ns - N)/Ns × 100.
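For example, here is a minimal sketch of the slip calculation; the 50 Hz supply, 4-pole machine and 1440 rpm rotor speed are assumed illustrative values, not from the original text:

```python
f = 50                    # supply frequency in Hz (assumed)
P = 4                     # number of poles (assumed)
Ns = 120 * f / P          # synchronous speed in rpm -> 1500
N = 1440                  # measured rotor speed in rpm (assumed)

S = (Ns - N) / Ns * 100   # slip as a percentage of synchronous speed
print(S)                  # -> 4.0 (per cent)
```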


Explain the application of storage batteries.


Storage batteries are used for various purposes, some of the applications are mentioned below:


• For the operation of protective devices and for emergency lighting at generating stations and substations.

• For starting, ignition and lighting of automobiles, aircraft etc.

• For lighting on steam and diesel railway trains.

• As a power source in telephone exchanges, laboratories and broadcasting stations.

• For emergency lighting at hospitals, banks and rural areas where mains electricity is not available.










Thermodynamic Equations and Examples

The following are common thermodynamic equations and sample problems showing a situation in which each might be used.



Work and Transfer of Heat and Energy


q=mCsΔT

q is the heat in J

m is the mass in g

Cs is the specific heat in J/g/C

ΔT is the change in temperature in Kelvin or Celsius


Example:


How many liters of water can 6.7 L of ethane boil? The initial temperature and pressure of the ethane and water are 0.95 bar and 25 ˚C.


First we must find the amount of heat released by the ethane. To do this, we calculate the number of moles of ethane gas using the ideal gas equation and multiply the molar heat of combustion by the number of moles.


ΔHcombustion= 1437.17 kJ/mol

n=PV/RT

n=[.95*6.7]/[.08314*298]

n=.2569mol
Heat released by ethane:  (1437.17 kJ/mol)*.2569mol = 369.21 kJ


Then using the heat equation we can find the mass of water that would be raised to boiling with the given amount of heat. First, the kJ must be converted to J to match the units of the specific heat.


369210=(m)(4.184)(373-298)


Using basic algebra we solve for the mass, and since water has a density of 1.0g/cm3, the mass will be equal to the volume.


m = 1176.58 g


Volume of water: 1.177 L
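The same chain of steps can be checked with a short Python script, using only the values given above:

```python
# Moles of ethane from the ideal gas law, PV = nRT
R = 0.08314                 # gas constant in L*bar/(mol*K)
P, V, T = 0.95, 6.7, 298    # bar, L, K
n = P * V / (R * T)         # -> ~0.257 mol

# Heat released by combustion, converted from kJ to J
q = 1437.17 * n * 1000      # -> ~369,210 J

# Mass of water heated from 25 C to boiling, q = m*Cs*dT
m = q / (4.184 * (373 - 298))
print(m / 1000)             # density of water is 1 g/mL, so -> ~1.18 L
```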


W = -Δ(PV)

W = Work

P = Pressure

V = Volume


Example:


A balloon filled with gas expands under a constant pressure of 2.0*10^5 Pascals from a volume of 5.0 L to 10 L.


Work = -Pressure * ΔVolume = -pressure * (Vfinal – Vinitial) = -2*10^5 * (10-5) Pa*L = -10^6 Pa*L = -1000 J = -1.0 kJ of work
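The only subtlety here is the unit conversion, 1 Pa·L = 10^-3 J. A quick check of the arithmetic:

```python
P = 2.0e5                  # constant external pressure in Pa
dV = 10.0 - 5.0            # change in volume in L
W = -P * dV * 1e-3         # 1 Pa*L = 1e-3 J
print(W)                   # -> -1000.0 J, i.e. -1.0 kJ of work
```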


E = q + W


E = Energy

q = Heat

W = Work


Example:


A balloon filled with gas expands its volume by 2.0 L. If the pressure outside the balloon is 0.93 bar and the energy change of the gas is 450J, how much heat did the surroundings give the balloon?


Remember that 1 L·bar = 100 J.

450=q-(0.93)*(2.0)*100


Now solving for q:

q=636 J
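The same rearrangement in Python, using the conversion 1 L·bar = 100 J:

```python
dE = 450.0            # energy change of the gas in J
P = 0.93              # external pressure in bar
dV = 2.0              # expansion in L

W = -P * dV * 100     # work done ON the gas; 1 L*bar = 100 J -> -186 J
q = dE - W            # E = q + W  =>  q = E - W
print(q)              # -> 636.0 J supplied by the surroundings
```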


P = V²/R

P is power is units of J/s

V is voltage in volts

R is resistance in Ω


Example:


A household heater operates at 4 V and 35 Ω and is used to heat up 15 g of copper wire. The molar heat capacity of copper is 24.440 J/mol/K. How much time is required to increase the temperature from 25 ˚C to 69 ˚C?


It is important to know the equation in circuitry that calculates power: P = V²/R, which is derived from the equation V = IR. We will also be using the heat equation, here with the molar heat capacity: q = nCmΔT.


P = (4)²/35 = 0.457 J/s

Because the heat capacity is given per mole, the 15 g of copper must first be converted to moles (molar mass of copper ≈ 63.55 g/mol):

q = (15/63.55)(24.440)(69-25) ≈ 253.9 J


We now know how many joules of heat must be added to the copper wire to increase the temperature, and we know how many joules of energy are given off by the heater per second. We divide to find the number of seconds.


(253.9 J)/(0.457 J/s) ≈ 555 seconds
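The corrected arithmetic in one short script (molar mass of copper taken as 63.55 g/mol):

```python
# Heater power from P = V^2 / R
P_heat = 4.0**2 / 35.0            # -> ~0.457 J/s

# Heat needed: the capacity is molar, so convert grams of copper to moles
n_Cu = 15.0 / 63.55               # molar mass of copper ~63.55 g/mol
q = n_Cu * 24.440 * (69 - 25)     # -> ~253.9 J

print(q / P_heat)                 # -> ~555 seconds
```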


ln(P1/P2) = [ΔHvap/R]*[(1/T2)-(1/T1)]

P1 and P2 refer to the pressures in any unit (bar, atmosphere, Pascal)

R is the gas constant that correlates with the pressure and temperature units used.

T1 and T2 refer to the temperatures in Kelvin


Example:


If the temperature of a water bath closed system is raised from room temperature to 65˚C and the initial pressure is 350 torr, what is the final pressure of the system? The heat of vaporization of liquid water is 43.99 kJ/mol.


The gas constant that is most convenient to use is 8.314 J/K/mol. Therefore it is important to convert the kJ value of the heat of vaporization to J.


ln(350/P2) = [43990/8.314]*[(1/338)-(1/298)]


Using basic algebra and the knowledge that e^ln(x) = x, we can solve for P2. (Note that the right-hand side is negative, so P2 comes out larger than P1: heating a closed system raises its vapor pressure.)


P2 ≈ 2862 torr
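A short verification of the corrected substitution:

```python
import math

dHvap = 43990.0          # heat of vaporization in J/mol
R = 8.314                # gas constant in J/(mol*K)
T1, T2 = 298.0, 338.0    # initial and final temperatures in K
P1 = 350.0               # initial pressure in torr

# ln(P1/P2) = (dHvap/R) * (1/T2 - 1/T1); negative here, so P2 > P1
ln_ratio = (dHvap / R) * (1/T2 - 1/T1)
P2 = P1 / math.exp(ln_ratio)
print(P2)                # -> ~2862 torr
```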


Ɛ=[TH-TL]/TH = w/q

Ɛ=efficiency

TH and TL are the temperatures in Kelvin

w is work in J

q is heat in J


Example:


Given a Carnot engine that absorbs 750 J of energy from a tank of hot water and rejects heat to a cold reservoir at 300 K, what is the hot-side temperature TH if 600 J of work was done by the system?


Set the two equivalent expressions equal to one another:


[TH-300]/TH = 600/750


Using basic algebra, solve for the hot-side temperature.

TH = 1500 K
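The same algebra as a tiny script:

```python
w, q = 600.0, 750.0      # work done and heat absorbed, in J
TL = 300.0               # cold-side temperature in K

eff = w / q              # efficiency -> 0.8
TH = TL / (1 - eff)      # rearranged from (TH - TL)/TH = eff
print(TH)                # -> 1500.0 K
```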


PV=nRT

P is Pressure

V is Volume

n is the number of moles present in the sample

R is the gas constant

T is temperature in Kelvins


Using the ideal gas law and knowing four of the five variables, it is possible to solve for the fifth. Note that the equation is exactly accurate only for an ideal gas; for real gases it is an approximation.


Example Problem:


If 0.2 moles of hydrogen gas occupies an inflexible container with a capacity of 45 mL and the temperature is raised from 25 C to 30 C, what is the change in pressure of the contained gas, assuming ideal behavior?


In order to find the change in pressure, you must first find both the initial pressure, P1, and the final pressure, P2, then subtract P1 from P2. This can be simplified by solving the ideal gas law for pressure, then subtracting initial conditions from final conditions.


P=nRT/V  or ΔP=nR(T2-T1)/(V)


Since R (0.08314 L·bar/mol/K) is in liters and Kelvins, you must first divide 45 by 1000 to convert from mL to L, for a value of 0.045 L. Then convert both temperatures to Kelvins by adding 273.15, for values of 298.15 K and 303.15 K. Or, since the degree size is the same in Celsius and Kelvin, you can simply subtract 25 from 30 for a difference of 5 ˚C = 5 K.


ΔP = [0.2 mol × (0.08314 L·bar/mol/K) × (5 K)]/0.045 L ≈ 1.85 bar
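The corrected numbers in Python form:

```python
n = 0.2            # moles of hydrogen gas
R = 0.08314        # gas constant in L*bar/(mol*K)
V = 45 / 1000      # container volume: 45 mL -> 0.045 L
dT = 30 - 25       # a 5 C rise is a 5 K rise

dP = n * R * dT / V
print(dP)          # -> ~1.85 bar
```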



 




Enthalpy


ΔH = Σ [Products - Reactants]

ΔH = Change in Enthalpy (for the reaction)

Products = elements and compounds on the right side of the chemical equation

Reactants = elements and compounds on the left side of the chemical equation


Example:


Using standard thermodynamic values, calculate the change in enthalpy of reaction (ΔHrxn) in the formation of liquid water from Hydrogen and Oxygen gas.


Chemical Equation:

H2(g) + ½O2(g) => H2O(l) + heat


Product:

ΔHf H2O(l) = -285.83 kJ/mol
Reactants:

ΔHf H2(g) = 0 kJ/mol (the ΔHf of elements in their standard state is defined to be 0 kJ)

ΔHf O2(g) = 0 kJ/mol x 2


Use ΔH of formation (Hf) for each of the chemicals involved in the reaction found in a standard table or reference book.


[(ΔHf H2O = -285.83 kJ/mol)] – [(½)*(ΔHf O2 = 0 kJ/mol) + (ΔHf H2 = 0 kJ/mol)]

ΔHrxn = SUM [(-285.83 kJ) – ((½)*0 kJ + 0 kJ)] = -285.83 kJ/mol
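This products-minus-reactants bookkeeping is easy to capture in a small helper function; a sketch, using the standard-table ΔHf values quoted above:

```python
def dH_rxn(products, reactants):
    """Hess's law: sum of (coefficient * dHf) over products minus reactants."""
    return sum(c * h for c, h in products) - sum(c * h for c, h in reactants)

# H2(g) + 1/2 O2(g) -> H2O(l), dHf values in kJ/mol
print(dH_rxn(products=[(1, -285.83)],
             reactants=[(1, 0.0), (0.5, 0.0)]))   # -> -285.83 kJ/mol
```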


Example:


Using standard thermodynamic values, calculate the enthalpy of the reaction of the combustion of methane gas with oxygen gas to form carbon dioxide and liquid water.


Chemical Equation:

CH4(g) + 2 O2(g) => CO2(g) + 2 H2O(l) + heat


Products:

ΔHf H2O(l) = -285.83 kJ/mol x 2

ΔHf CO2(g) = -393.51 kJ/mol
Reactants:

ΔHf CH4(g) = -74.87 kJ/mol

ΔHf O2(g) = 0 kJ/mol x 2

Use values found in a standard table or reference book


[2*(ΔHf H2O(l) = -285.83 kJ/mol) + ΔHf CO2(g) = -393.51 kJ/mol]

- [2*(ΔHf O2 = 0 kJ/mol) + (ΔHf CH4 = -74.87 kJ/mol)] =

ΔHrxn = [2*(-285.83 kJ) + (-393.51 kJ)] – [(2*0 kJ) + (-74.87 kJ)] = -890.3 kJ/mol


Example:


How much heat is released when burning 0.5 kg of liquid rubbing alcohol (2-propanol)? Products are carbon dioxide and liquid water. Assume an excess of oxygen.


Chemical Reaction:

2 C3H8O(l) + 9 O2(g) => 6 CO2(g) + 8 H2O(l) + heat

0.5 kg propanol * (1 mol / 60.084 g) = 8.3 mols propanol


ΔHrxn = [products - reactants] = [6*(-393.51 kJ) + 8*(-285.83 kJ)] – [2*(-318.2 kJ) + 9*(0 kJ)]

= -4011.3 kJ / (2 mols propanol) = -2005.65 kJ/mol

-2005.65 kJ/mol * 8.3 mols = -16646 kJ released as heat
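The whole calculation for the rubbing-alcohol example fits in a few lines, again using the ΔHf values quoted above:

```python
# Standard enthalpies of formation in kJ/mol, per the values quoted above
dHf = {"CO2": -393.51, "H2O": -285.83, "C3H8O": -318.2, "O2": 0.0}

# 2 C3H8O(l) + 9 O2(g) -> 6 CO2(g) + 8 H2O(l)
dH_rxn = (6*dHf["CO2"] + 8*dHf["H2O"]) - (2*dHf["C3H8O"] + 9*dHf["O2"])
per_mol = dH_rxn / 2               # -> -2005.65 kJ per mole of propanol

mols = 500.0 / 60.084              # 0.5 kg of propanol
print(per_mol * mols)              # -> ~ -16,690 kJ (the text's -16,646 kJ
                                   #    comes from rounding to 8.3 mol)
```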



Entropy


ΔS Universe = ΔS Surroundings + ΔS System

ΔS is the change in entropy


Example:


If 1.6g of CH4 reacts with oxygen gas to form water and carbon dioxide what is the change in entropy for the universe?


Reaction Equation:


CH4 + 2O2 -> CO2 + 2H2O


To solve this problem the following equations are also necessary:


ΔS System =ΣΔSProducts – ΣΔSReactants

ΔS System =[(.21374 kJ/mol)+(2* .06995 kJ/mol)]-[(2*.20507 kJ/mol)+( .18626 kJ/mol)] = -.24276 kJ/mol


ΔH System =ΣΔHProducts – ΣΔHReactants

ΔH System = [( -393.509 kJ/mol)+(2* -285.83 kJ/mol)]-[(2*0)+( -74.87 kJ/mol)] = -890.229 kJ/mol


ΔS Surroundings = -ΔH System/T

ΔS Surroundings = -(-890.229)/298 = +2.9873 kJ/molK


ΔS Universe = ΔS Surroundings + ΔS System

ΔS Universe = 2.9873 kJ/molK + (-0.24276 kJ/molK) = +2.745 kJ/molK per mole of CH4. For the 1.6 g (≈0.1 mol) of CH4 in the problem, ΔS Universe ≈ +0.274 kJ/K.
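With the signs handled as above, the whole calculation is easy to script:

```python
# Standard molar entropies in kJ/(mol*K), as quoted above
S = {"CO2": 0.21374, "H2O": 0.06995, "O2": 0.20507, "CH4": 0.18626}

dS_sys = (S["CO2"] + 2*S["H2O"]) - (2*S["O2"] + S["CH4"])    # -> -0.24276
dH_sys = (-393.509 + 2*(-285.83)) - (2*0 + (-74.87))         # -> -890.229 kJ/mol
dS_surr = -dH_sys / 298                                      # -> +2.9873
dS_univ = dS_surr + dS_sys                                   # -> +2.745 kJ/(mol*K)

print(dS_univ * 1.6 / 16.04)   # scaled to 1.6 g (~0.1 mol) of CH4 -> ~0.274 kJ/K
```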


ΔS = k*ln(W2/W1)

ΔS is the change in entropy

k is the Boltzmann constant in J/K/particle

W2 and W1 are the numbers of microstates available in the final and initial states


Example:


The volume of a gas starts at 5.0 L at a temperature of 400K and a pressure of 1.12 bar. If the change in entropy was .787 J/K/mol, what was the final volume of the gas?


Remember that the number of microstates available to an ideal gas is proportional to its volume, so ΔS = k*ln(W2/W1) becomes ΔS = nR*ln(V2/V1). Also, the Boltzmann constant is per particle; the gas constant R times the number of moles is the equivalent quantity for a mole-sized sample.


ΔS=Rn*ln(V2/V1)

R is the gas constant

n is the number of moles of gas

V2 and V1 are volumes in L


.787=(8.314)n*ln(V2/5.0)


Now to find the number of moles we use the ideal gas law:


n=PV/RT

n = [1.12*5.0]/[0.08314*400]


Now plug this value into the original equation and solve for the final volume.


V2=8.77 L
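The two-step solution, scripted:

```python
import math

dS = 0.787                  # entropy change in J/K
P, V1, T = 1.12, 5.0, 400   # bar, L, K

n = P * V1 / (0.08314 * T)            # ideal gas law, R in L*bar/(mol*K)
V2 = V1 * math.exp(dS / (8.314 * n))  # invert dS = n*R*ln(V2/V1), R in J/(mol*K)
print(V2)                             # -> ~8.77 L
```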


ΔSRxn =ΣΔSProducts – ΣΔSReactants


Example:


Calculate the change in entropy for the dissociation of HCl(aq) into H+ and Cl-.


First, you must write out the full equation.


HCl(aq) -> H+(aq) + Cl-(aq)


Next, look up each compound in a thermodynamic table and plug the values into the equation.

ΔS System=(56.6+ 0) J/molK- 186.9 J/molK = -130.3 J/molK


 




Gibbs Free Energy


 


K=e^[-ΔG/RT]

K is the equilibrium constant

e is the numerical value 2.718

ΔG is the change in Gibbs free energy in J/mol

R is the gas constant

T is the temperature in K


Example:


What is the equilibrium constant for the formation of N2O4 gas from NO2 gas molecules? The temperature of the reaction is 310.5K.


First, the balanced equation must be written:


2NO2(g) -> N2O4(g)


Now using thermodynamic values for enthalpy and entropy, the Gibbs free energy can be calculated.


ΔG=ΔH-TΔS


Remember that the total change in enthalpy or entropy is the sum of the change in enthalpies/entropies of the products minus the sum of the change in enthalpies/entropies of the reactants.


ΔH System =ΣΔHProducts – ΣΔHReactants

ΔH System=[9.08 kJ/mol]-[2*33.1 kJ/mol] = -57.12 kJ/mol


ΔS System =ΣΔSProducts – ΣΔSReactants

ΔS System=[.30438 kJ/molK]-[2*.24004 kJ/molK] = -.1757 kJ/molK


Also, the T used is not room temperature, but the temperature given in the problem – the temperature at which the reaction takes place.


ΔG=ΔH-TΔS

ΔG=[-57.12]-310.5[-.1757] = -2.565 kJ/mol  or -2565 J/mol


Once ΔG is calculated, the original equation can be used to solve for K.


K=e^[-ΔG/RT]

K = e^[2565/(8.314*310.5)] = 2.701
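The full chain ΔH, ΔS → ΔG → K in a few lines:

```python
import math

dH = 9.08 - 2*33.1            # kJ/mol -> -57.12
dS = 0.30438 - 2*0.24004      # kJ/(mol*K) -> -0.1757
T = 310.5                     # K

dG = dH - T*dS                        # -> ~ -2.565 kJ/mol
K = math.exp(-dG*1000 / (8.314*T))    # convert dG to J/mol for R in J/(mol*K)
print(K)                              # -> ~2.70
```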


 ΔG=ΔH-TΔS

ΔG is the change in Gibbs Free Energy for the reaction

ΔH is the change in enthalpy for the reaction

ΔS is the change in entropy for the reaction

T is the temperature in Kelvin


Example:


Find the change in Gibbs Free Energy for the reaction of hydrochloric acid and sodium hydroxide to form liquid water and sodium chloride at 31 ˚C.


First you must write the chemical equation for the reaction: HCl(aq) + NaOH(aq) -> H2O(l) + NaCl(aq)


Next, you must calculate ΔH and ΔS for the reaction.


ΔHRxn =ΣΔHProducts – ΣΔHReactants

ΔHRxn = [(-285.8 + (-411.54)) kJ/mol] - [(-167.16 + (-470.1)) kJ/mol] = -60.08 kJ/mol


ΔSRxn =ΣΔSProducts – ΣΔSReactants

ΔSRxn=[(0.06991+ 0.07238) kJ/molK]-[0.0565 + 0.0482 kJ/molK] = 0.03759 kJ/molK


Next, the temperature must be converted to Kelvins by adding 273.15 to its value in Celsius.


31 + 273.15 = 304.15 K


Finally, all of these values are plugged into the original equation, ΔG=ΔH-TΔS.


ΔG = -60.08 kJ/mol – 304.15 K × (0.03759 kJ/molK) = -71.51 kJ/mol
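And the same calculation scripted:

```python
T = 31 + 273.15                                    # -> 304.15 K
dH = (-285.8 + (-411.54)) - (-167.16 + (-470.1))   # -> ~ -60.08 kJ/mol
dS = (0.06991 + 0.07238) - (0.0565 + 0.0482)       # -> 0.03759 kJ/(mol*K)

dG = dH - T*dS
print(dG)                                          # -> ~ -71.5 kJ/mol
```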


There also exist equations to correct for the temperature dependence of ΔH and ΔS, but they are not commonly used because the difference is typically very slight. They are:


ΔH(T) = ΔH˚rxn + ʃΔCp dT

ΔS(T) = ΔS˚rxn + ʃ(ΔCp/T) dT


If you need to use these equations for an extremely accurate value of ΔG, simply solve these equations for ΔH and ΔS, then plug these values into the equation for ΔG.





concept of continuum in thermodynamics

Matter exists in molecules. Solids have great cohesive force between molecules and hence retain a definite shape. Liquids also have good cohesive force between molecules, but not as much as solids; they take the shape of the container. Gases have very little cohesive force between molecules and hence move randomly through the given volume. From the macroscopic point of view, the volumes considered are large compared to the size of the molecules. It is assumed that the volume under consideration contains enough molecules that the definitions of density, mass etc. are not altered even though molecules move in and out.

 


Now let us take a volume comparable to the volume of a molecule. Due to the random movement of the molecules, the volume under consideration may contain molecules at one instant and none at another. In that case, the definitions of density and other properties would have no meaning. Hence it is assumed that the volume under consideration always contains enough molecules for the definitions of the properties to be meaningful.



Sunday, 12 January 2014

LinkedIn Sues To Halt Use of Phony IDs To Steal Job Ads



A bunch of John Does may find themselves with some explaining to do if a California federal court sides with LinkedIn in a lawsuit filed this week. Then again, the professional networking super site may just be trying to make a point.


Miffed about the proliferation of fake profiles seemingly intended to connect to real job-seekers and employers for the sake of mining their data, Mountain View, Calif.-based LinkedIn is seeking injunctive relief and damages against 10 anonymous defendants.


According to the court papers, LinkedIn believes the group members are people “employing various automated software programs,” known as bots, to register fake LinkedIn accounts and use a practice called “scraping,” in violation of the network’s terms of service, circumventing various “technical protection barriers” employed by LinkedIn. The conduct also violates federal and California computer laws and federal copyright laws.


Undermining LinkedIn’s Credibility


This behavior undermines the “integrity and effectiveness of LinkedIn’s professional network,” the company claims.


It’s the latest salvo in the war against people who essentially encounter no resistance in using social media either for frivolous or nefarious purposes. Twitter has seen users create fake accounts in the name of celebrities or major corporations and send out damaging Tweets, while Facebook has admitted that it has thousands of fake user accounts that it says it is trying to delete. Facebook last September won a $3 million judgment against a company found by a judge to have sent more than 60,000 spam messages to Facebook members, and Craigslist also won a victory against a company it said ripped off its real estate ads.


With a more serious purpose than most social networks, LinkedIn can perhaps even less afford users who damage its business model.


Too Savvy To Be Sued?


The question is whether LinkedIn or the court can actually unmask the hackers. The company declares its intention to seek “expedited discovery” to learn their identities and reserves the right to amend the complaint.


“While a court case is one step to address this issue of fake accounts on LinkedIn, LinkedIn’s lawsuit won’t necessarily identify the scammers behind the scheme if the accused are savvy at all and cover their tracks,” said Chester Wisniewski, a senior analyst at the global cybersecurity firm Sophos.


“It appears they were abusing Amazon’s EC2 service, which is often abused by spammers and others,” Wisniewski told us. “Most criminals are smart enough not to use their real identities and stick to using stolen credit cards to pay for the service.”


LinkedIn, which claims 259 million members in 200 countries, says in the court papers it acted quickly to halt the defendants’ activities by disabling the phony accounts and “implementing additional technical protection barriers.” It hopes the court action will bring “permanent injunctive relief halting their unlawful conduct.”


The alleged hackers could not be reached for comment.



LinkedIn Sues To Halt Use of Phony IDs To Steal Job Ads

4 Ways To Keep Your E-Commerce Site Safer in 2014

So you made it through the holiday shopping season without a scuffle. You watched Target get hit with one of the largest-ever data breaches and you’ve seen the predictions for the rising security threats in 2014 — and they are kind of scary.


The big question is, how will you make your site more secure for the new year?


There’s a lot at stake. According to the National Retail Federation, every single hour of downtime due to a Web site outage or a malicious attack can have a significant impact on your reputation and revenue. VeriSign figures even a few minutes of downtime can lead to financial losses in the tens of thousands of dollars, not to mention customer frustration.


“With the stakes so high, Internet retailers need to adopt a 360-degree approach to security during the holiday season, and year-round ideally,” said Sean Leach, vice president of Strategy and Technology for the VeriSign Network Intelligence & Availability Group. He offered four tips on keeping your site secure.


Prepare for the Worst, Plan for the Best


To ensure Web site availability and security, Leach said online retailers need to prepare for the worst through escalation and incident-response planning, outlining standard operating procedures for downtime, including establishing and training incident-response teams.


“They should also monitor their site diligently to determine service health and identify anomalies quickly and accurately, as well as provide failover to back-up IP addresses to ensure the site is always available,” Leach said.
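As a minimal sketch of the monitor-and-failover loop Leach describes (the health endpoint, the backup address and the DNS hook below are hypothetical placeholders, not anything from the article or a particular vendor), a script might poll the site and repoint traffic after several consecutive failures:

# Minimal site health-check loop with failover to a backup address.
# PRIMARY, BACKUP_IP and switch_dns_to() are hypothetical placeholders;
# a real deployment would call its DNS provider's API instead.
import time
import urllib.request

PRIMARY = "https://shop.example.com/health"   # hypothetical health endpoint
BACKUP_IP = "203.0.113.10"                    # documentation-range address
FAILURE_THRESHOLD = 3                         # consecutive failures before failover

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def switch_dns_to(ip: str) -> None:
    """Placeholder for a DNS-provider API call that repoints the A record."""
    print(f"FAILOVER: repointing site to backup address {ip}")

failures = 0
while True:
    if is_healthy(PRIMARY):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            switch_dns_to(BACKUP_IP)
            break
    time.sleep(30)   # poll every 30 seconds

In practice the threshold and polling interval would be tuned to the site’s traffic, and the failover hook would be replaced with a call to whatever DNS management service the retailer actually uses.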


Improve Your Infrastructure


Leach recommends optimizing the scalability and performance of your Internet infrastructure with demonstrated management of the increased traffic load coming your way during the holiday shopping season.


“Whether you manage your site internally or through a vendor, a track record of maintaining satisfactory service levels during the rest of the year may not be a reliable indicator that service levels can be maintained during the peak holiday traffic season,” he said. “If scalability and performance of your infrastructure are not optimized, it could damage your sales revenue and reputation at the worst possible time.”


Don’t Forget About DDoS


With the increase in size and complexity of distributed denial of service (DDoS) attacks, Leach said companies should consider leveraging upstream service providers to protect both Web servers and DNS.


“If either goes down, a company could be out of business,” he said. “A cloud-based approach to both DNS management and DDoS protection provides a cost-effective alternative for maintaining uptime.”


Implement Security Best Practices


Finally, implement security best practices by partnering with a security provider for holistic support. Leach pointed out that not all e-commerce sites can develop an internal cyberintelligence capability.


“Security service providers can help to quickly identify and understand the various security incidents and their implications, determine effective mitigation and remediation tactics, and develop a clear plan to enhance security,” he said. “Delivered via the cloud, such services combine fully reliable DNS resolution and DDoS attack protection to support critical Web-based systems and reduce the risk of downtime.”



4 Ways To Keep Your E-Commerce Site Safer in 2014

T-Mobile Makes Changes, But Are They Enough To Lead?





 


Sprint and T-Mobile have been languishing in the carrier wars for more than four years, while Verizon and AT&T have dominated the industry, primarily due to better phones, service, and pricing. But now, things may be changing.


Since becoming CEO in 2012, T-Mobile’s John Legere has implemented a variety of corporate changes, and now it’s time to see if these changes are enough to propel T-Mobile into the same league as AT&T and Verizon.


A Great Offer


In the early phases of its so-called “UnCarrier” positioning, T-Mobile eliminated annual service contracts, something its competitors are now showing interest in as well. While this strategy is not new on the international scene, annual contracts have been an integral part of the U.S. industry for many years.


After dropping the requirement for annual contracts, T-Mobile attracted a record-breaking number of new customers in 2013. Even though it still controls just a small portion of the overall market, in late 2013 T-Mobile added more users than any of its competitors.


To sway even more customers away from AT&T and Verizon over the coming years, T-Mobile has offered to pay off some early termination fees its new customers may incur in switching over to T-Mobile. Specifically, the carrier will offer new customers $350 per line switched over to T-Mobile, as well as up to $300 for trading in their current phone. So, are you tempted to make the switch?
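To put rough numbers on that offer (a back-of-the-envelope sketch using only the figures quoted above; actual trade-in credit depends on the device), the best case for a family switching four lines works out as follows:

# Rough best-case value of T-Mobile's switching offer, using only the
# figures quoted above: $350 ETF payoff per line plus up to $300 per
# traded-in phone (real trade-in value varies by device).
ETF_CREDIT = 350      # per line switched, USD
TRADE_IN_MAX = 300    # maximum per traded-in phone, USD

lines = 4             # example: a family of four
best_case = lines * (ETF_CREDIT + TRADE_IN_MAX)
print(f"Up to ${best_case} in credits for {lines} lines")   # up to $2600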


Is It Enough?


T-Mobile still has a lot of work to do if it is going to match the number of customers that AT&T and Verizon currently have. The aspiring carrier may have made it easier for people to switch to its network by paying off ETFs, but at the same time, AT&T has responded in kind. AT&T is also now offering deals to pay off some early termination fees, thereby making it easier for people to leave T-Mobile, as well.


Since many aspects of T-Mobile’s UnCarrier plans have been tried in other countries, or were considered in the U.S. several years ago, some feel that T-Mobile’s customer-acquisition strategies are not innovative enough.


“I think T-Mobile should be given credit for starting to grow once again after years of decline, but I don’t think they are responsible for any of the industry changes. They would happen with or without T-Mobile,” said industry analyst Jeff Kagan.


In fact, AT&T and Verizon still control 70% of the mobile services market, and the most important factor in customers’ choice of carrier appears to be not financial incentives but quality of service. That remains T-Mobile’s biggest outstanding challenge at this point in its quest to compete with the market leaders.


 


 



T-Mobile Makes Changes, But Are They Enough To Lead?

Nanoparticles cause cancer cells to die and stop spreading

Leukocytes are white blood cells; liposomes are fat-based nanoparticles.


More than nine in ten cancer-related deaths occur because of metastasis, the spread of cancer cells from a primary tumour to other parts of the body. While primary tumours can often be treated with radiation or surgery, the spread of cancer throughout the body limits treatment options. This, however, could change if work done by Michael King and his colleagues at Cornell University delivers on its promise, because they have developed a way of hunting down and killing metastatic cancer cells.


When diagnosed with cancer, the best news can be that the tumour is small and restricted to one area. Many treatments, including non-selective ones such as radiation therapy, can be used to get rid of such tumours. But if a tumour remains untreated for too long, it starts to spread. It may do so by invading nearby, healthy tissue or by entering the bloodstream. At that point, a doctor’s job becomes much more difficult.


Cancer is the unrestricted growth of normal cells, which occurs when mutations in a normal cell cause it to bypass a key mechanism called apoptosis (programmed cell death) that the body uses to clear out old cells. Since the 1990s, however, researchers have been studying a protein called TRAIL which, on binding to receptors on a cell, can reactivate apoptosis. But so far, using TRAIL as a treatment for metastatic cancer hasn’t worked, because cancer cells suppress their TRAIL receptors.


When attempting to develop a treatment for metastases, King faced two problems: targeting moving cancer cells and ensuring cell death could be activated once they were located. To handle both issues, he built fat-based nanoparticles that were one thousand times smaller than a human hair and attached two proteins to them. One is E-selectin, which selectively binds to white blood cells, and the other is TRAIL.


He chose to stick the nanoparticles to white blood cells because doing so keeps the body from excreting the particles easily. The nanoparticles, made from fat molecules, therefore remain in the blood longer and have a greater chance of bumping into freely moving cancer cells.
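A toy model (an illustration added here, not from the study) shows why residence time matters: if encounters with circulating tumour cells occur at some constant rate, the chance of at least one encounter is 1 - exp(-rate x time), so staying in the blood longer raises the hit probability sharply.

# Toy model of why longer circulation helps: if a nanoparticle meets
# circulating tumour cells at a constant rate r, the probability of at
# least one encounter in time t is 1 - exp(-r * t) (a Poisson process).
# The rate below is illustrative only, not a value from the study.
import math

r = 0.02   # hypothetical encounters per minute

for t in (10, 30, 60, 120):   # residence time in the blood, minutes
    p = 1 - math.exp(-r * t)
    print(f"{t:4d} min in circulation -> P(at least one encounter) = {p:.2f}")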


There is an added advantage. Red blood cells tend to travel in the centre of a blood vessel while white blood cells stick to the edges, because red blood cells have a lower density and deform easily to slide around obstacles. Cancer cells have a density similar to that of white blood cells and remain close to the vessel walls, too. As a result, the nanoparticles are more likely to bump into cancer cells and bind to their TRAIL receptors.


King, with help from Chris Schaffer, also at Cornell University, tested these nanoparticles in mice. They first injected healthy mice with cancer cells, and then after a 30-minute delay injected the nanoparticles. These treated mice developed far fewer cancers, compared to a control group that did not receive the nanoparticles.


“Previous attempts have not succeeded, probably because they couldn’t get the response that was needed to reactivate apoptosis. With multiple TRAIL molecules attached on the nanoparticle, we are able to achieve this,” Schaffer said. The work has been published in the Proceedings of the National Academy of Sciences.


While these are exciting results, the research is at an early stage. Schaffer said that the next step would be to test mice that already have a primary tumour.


“While this is an exciting and novel strategy,” according to Sue Eccles, professor of experimental cancer therapeutics at London’s Institute of Cancer Research, “it would be important to show that cancer cells already resident in distant organs (the usual clinical reality) could be accessed and destroyed by this approach. Preventing cancer cells from getting out of the blood in the first place may only have limited clinical utility.”


But there is hope for cancers that spend a lot of time in the blood circulation, such as cancers of the blood, bone marrow and lymph nodes. As Schaffer said, any attempt to control the spread of cancer is bound to help. It remains one of the most exciting areas of research into future cancer treatment.



Nanoparticles cause cancer cells to die and stop spreading

GSAT-14 moved up further

Ground controllers of the Indian Space Research Organisation raised the elliptical orbit of the GSAT-14 again on Tuesday to make it more circular.


 


They sent commands from the Master Control Facility at Hassan, Karnataka, to the satellite’s propulsion system, the Liquid Apogee Motor (LAM). The LAM engine fired for 44 minutes, raising the orbit to a perigee of 32,160 km and an apogee of 35,745 km; the inclination achieved was 0.6 degree. When it was first fired on Monday, the LAM had helped the 1,982-kg satellite achieve an orbit of 8,966 km (perigee) by 35,744 km (apogee).
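For a sense of how close that intermediate orbit is to geosynchronous (a worked example added here, using standard two-body constants rather than figures from ISRO), Kepler’s third law gives the orbital period from the quoted perigee and apogee altitudes:

# Orbital period of GSAT-14's intermediate orbit from the perigee and
# apogee altitudes quoted above, via Kepler's third law. Earth's radius
# and gravitational parameter are standard textbook values.
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378.0         # km, mean equatorial radius

perigee_alt = 32_160.0    # km, altitude after the second firing
apogee_alt = 35_745.0     # km

a = R_EARTH + (perigee_alt + apogee_alt) / 2     # semi-major axis, km
T = 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)   # orbital period, s
print(f"Period: {T / 3600:.1f} hours")           # about 22.4 h

The result, roughly 22.4 hours, is already close to the 23.9-hour geosynchronous period, which is why each successive firing needs to add less.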


 


M. Nageswara Rao, Project Director, GSAT-14, said that during the third orbit-raising manoeuvre scheduled for January 9, the LAM would be fired for 193 seconds. If successful, the operation would take the satellite to a much more circular orbit with a perigee of 35,564 km and an apogee of 35,741 km. The inclination will be 0.25 degree.


 


Then “the satellite will slow down and move towards its final orbit. It is expected to reach the final circular orbit, at 74 degrees east longitude, on January 20. We will declare it usable by January 27.”


 


The satellite, launched by the GSLV-D5 on January 5, was in good health, Mr. Rao said. Its final, circular geo-stationary orbit would be at a height of 36,000 km.



GSAT-14 moved up further

India to launch Chandrayaan-II by 2017 after successful Chandrayaan-I mission


Buoyed by the success of the Chandrayaan-I mission, which orbited the Moon, India will launch a second, more ambitious mission to land a rover there in the next two to three years.


Chandrayaan-II will carry an indigenously developed rover and lander and will be launched on the Geosynchronous Satellite Launch Vehicle (GSLV).


“Chandrayaan-II is a mission where we essentially need to move on (lunar) surface to conduct experiments. We will launch Chandrayaan-II with an indigenous rover and lander using GSLV by 2016 or 2017,” Space Secretary K. Radhakrishnan said at a press conference here.


Chandrayaan-I, India’s first mission to the Moon, was launched successfully on October 22, 2008 from Sriharikota. The spacecraft orbited the Moon at a height of 100 km above the lunar surface, carrying out chemical, mineralogical and photo-geologic mapping.


Talking about Chandrayaan-II, Mr. Radhakrishnan said a study was done to check whether an indigenous lander and rover could be developed; it returned positive results, after which ISRO decided to go ahead with the project.


“In May 2012, we conducted a feasibility study on development of a lander and this has been completed. We find that we will be able to develop a lander in India. We need 2-3 years time,” he said.


Mr. Radhakrishnan, however, added that a few technological elements of the lander still needed to be developed.


“First, we need to reduce the velocity of the lander as it comes in for a soft landing. Second, we have to develop the mechanisms involved in a lander. Third, we must locate precisely where to land by taking pictures and then steer the lander to the place it has to land,” he said.
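To give a feel for the first of those challenges (a rough estimate added here, using the standard circular-orbit formula and published lunar constants, not ISRO figures), a lander departing from a 100-km lunar orbit, such as the one Chandrayaan-I flew, must shed an orbital speed of about 1.6 km/s before touchdown:

# Rough scale of the braking problem for a lunar lander: the circular
# orbital speed at 100 km above the Moon, essentially all of which must
# be shed (plus gravity losses) before a soft landing.
import math

MU_MOON = 4_902.8    # km^3/s^2, lunar gravitational parameter
R_MOON = 1_737.4     # km, mean lunar radius

r = R_MOON + 100.0                                # orbital radius, km
v = math.sqrt(MU_MOON / r)                        # circular orbital speed, km/s
print(f"Orbital speed at 100 km: {v:.2f} km/s")   # about 1.63 km/s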


India to launch Chandrayaan-II by 2017 after successful Chandrayaan-I mission