Tuesday, April 7, 2009

The brain and multifaceted language exposure

While we're on the topic of the bleeding edge of language-learning technology, an article entitled Brain Researchers Open Door to Editing Memory in today's New York Times has some information relevant to language learners regarding how the brain learns:
[B]rain cells activated by an experience keep one another on biological speed-dial, like a group of people joined in common witness of some striking event. Call on one and word quickly goes out to the larger network of cells, each apparently adding some detail, sight, sound, smell. The brain appears to retain a memory by growing thicker, or more efficient, communication lines between these cells.
My approach to language learning has always been built on multiple types of exposure. Take a new vocab word, for example. Let's say you come across it in a book. You've now got speed dial set up between that book and the word, and perhaps between the word and the sentence, the paragraph, the thing it referred to, and so on. Then you look it up, and now you've built a connection to the meaning. Later you hear it in a podcast; there's another connection. An example like this fits neatly into the paradigm the researchers suggest: you're building thicker connections to that word, and you're thus more likely to learn it. Apply that to every unit of language learning (words, phrases, grammar rules, characters, pronunciation, intonation, etc.) and you can see how varied exposure makes language learning easier.
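If a sketch helps make that concrete, here's a toy model of the idea (my own illustration, not anything from the Times article): each exposure links a word to the context it appeared in, and more contexts mean a thicker web of connections. The word "saudade" and the contexts are made-up placeholders.

from collections import defaultdict

# Toy model of the "speed dial" idea: every exposure links a word to the
# context it appeared in; more contexts mean a thicker web of connections.
exposures = defaultdict(set)

def record_exposure(word, context):
    """Remember that we met this word in this context."""
    exposures[word].add(context)

# A hypothetical word encountered three different ways:
record_exposure("saudade", "novel, chapter 3")
record_exposure("saudade", "dictionary lookup (meaning)")
record_exposure("saudade", "podcast episode")

# The more contexts a word is wired to, the more likely it is to stick.
print(len(exposures["saudade"]))   # -> 3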

A quick look at the ethical issues, and a clip from The Matrix, after the jump.

Language-learning robot?

Japan is once again leading the way in sort-of-creepy-but-still-pretty-damn-cool robots. The child-like robot below, known as CB2, has some interesting language-learning abilities:
In coming decades, [Osaka University professor Minoru] Asada expects science will come up with a "robo species" that has learning abilities somewhere between those of a human and other primate species such as the chimpanzee.

And he hopes that this little CB2 may lead the way into this brave new world, with the goal to have the robo-kid speaking in basic sentences within about two years, matching the intelligence of a two-year-old child.
Read the full article here, or just check out the video below. So far, all the robot appears to be able to say is "e" え.

Saturday, April 4, 2009

Rocket Languages' language software reviews: Astroturfing at its finest

I've mentioned before how RosettaStone's PR people seem to be everywhere, and competitor Rocket Languages doesn't seem to be taking it sitting down. In fact, they've mounted a full-court press, including what appears to be a pretty blatant astroturfing campaign.

Here's how Wikipedia defines astroturfing:

formal political, advertising, or public relations campaigns seeking to create the impression of being spontaneous "grassroots" behavior, hence the reference to the artificial grass, AstroTurf.
Why do I think Rocket might be astroturfing? The blatant evidence, after the jump.

Language-learning linkwrap 4/4/2009

European Parliament split over language teaching: Next time any of my fellow yanks get themselves in a tizzy regarding the use of Spanish in the U.S., just remember: it could be worse; translation costs could take up 1% of our budget. Tangential money quote: "'[P]romoting the learning of […] an international "lingua franca",' such as English, should be a 'political priority'." As if there were another international lingua franca.

Young Americans going abroad to teach: When in economic peril, teach English abroad.

Statistical language learning in neonates revealed by event-related brain potentials: Say what? Babies can learn in their sleep! I wonder when and if that wears off...

On to Z! Quirky regional dictionary nears finish: For buffs of obscure Americanisms, this book's for you.

More languages, not fewer: Professor Erin Hippolyte "regularly see[s] statistics that link world language proficiency to salaries that are 8-20 percent higher." What exactly is a "world language" anyway? I wonder if it's a West Virginia regionalism for "foreign language". Someone should check a quirky regional dictionary. I am probably proficient in one or two "world languages", so where do I apply for the raise? When are Professor Hippolyte's office hours?

The Waver's Dilemma: A lot more information on how runners communicate in English than I gave you in my post on the runners' nod. For the record, I'm personally against waving on the grounds that it makes you break form.

Getting to Grammar: Somewhere over the rainbow, frequency grammars

I seem to have gotten myself mixed up in a big debate about learning grammar, so this post is the first in an unnumbered series called "Getting to Grammar" in which I lay out my strategy and respond to some of the other ideas banging around the language-learning blogosphere regarding grammar.

Geoff of Confessions makes a valid criticism of the state of modern grammars:
When I was in grad school, we talked about the spiral syllabus. Imagine a spiral staircase going up multiple floors: You keep coming back to the same points, but at a higher level each time. Unfortunately, conventional grammars don't do this. They typically are divided into, e.g., phonology, morphology, and syntax, with morphology broken down into nouns, adjectives, adverbs, verbs, etc. The treatments can get pretty exhaustive and the learner has to figure out how deep to dive in.
How to mitigate the problem, and the as-far-as-I-know non-existent solution of a frequency grammar, after the jump.

Wednesday, April 1, 2009

How many languages can fit into your brain?

If this guess is anywhere near accurate, pretty much as many as you want:
Although we’re forced to guess because the neural basis of memory isn’t understood at this level, let’s say that one movable synapse could store one byte (8 bits) of memory. That thimble would then contain 1,000 gigabytes (1 terabyte) of information. A thousand thimblefuls make up a whole brain, giving us a million gigabytes — a petabyte — of information. To put this in perspective, the entire archived contents of the Internet fill just three petabytes.
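For what it's worth, the arithmetic in that guess holds together. Here's the back-of-the-envelope version, using only the quote's own assumptions (one byte per synapse, which implies roughly a trillion synapses per thimbleful, and a thousand thimblefuls per brain):

# Back-of-the-envelope arithmetic from the quote above (toy numbers, not neuroscience):
BYTES_PER_SYNAPSE = 1               # the quote's guess: one byte per movable synapse
SYNAPSES_PER_THIMBLE = 10**12       # implied by "1,000 gigabytes" per thimbleful
THIMBLES_PER_BRAIN = 1_000          # "a thousand thimblefuls make up a whole brain"

bytes_per_thimble = BYTES_PER_SYNAPSE * SYNAPSES_PER_THIMBLE
bytes_per_brain = bytes_per_thimble * THIMBLES_PER_BRAIN

print(bytes_per_thimble / 10**12)   # 1.0 -> one terabyte per thimble
print(bytes_per_brain / 10**15)     # 1.0 -> one petabyte per brain, vs. the quote's ~3 PB for the archived Internet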