Tuesday, April 27, 2010

Why output trumps input in language learning

OK, so I don't really think that output trumps input, but I thought I'd lead off with a contrarian title vis-à-vis Steve Kaufmann's post entitled Why input trumps output in language learning. Some amount of input necessarily needs to come before you can produce any output, but saying one trumps the other is like saying reading blogs trumps writing blogs; sure, you can learn a lot by reading blogs, but you'll only be getting your message out there once you start writing one. (And, incidentally, in either case, you'll be getting exposure to a language.)

The reason I went with a contrarian title was because, when I read Steve's post, I thought that most of his arguments for input learning could easily be changed to serve as arguments for getting into output sooner rather than later. Below I've edited Steve's post to show how easily those arguments can be turned in the other direction. I've tried to edit as little as possible. Some of the changes work better than others, and some even work surprisingly well, but they all go to my main point here, which is that early output is a good thing.

I've used strikethrough to mark text I deleted from Steve's post, while bold marks the text I added:
Some arguments in favor of **output** ~~input~~. I am sure there are many more.
  • We need to **start speaking** ~~understand~~ before we can speak well.
  • I would rather **communicate with people early** ~~understand well~~ and stumble when I speak than **communicate with people later and stumble less** ~~the reverse~~.
  • If we **can never practice producing intelligible phrases** ~~and do not understand the answers~~, our conversations will not last long.
  • Passive vocabulary is powerful, necessary, and always much larger than our active vocabulary of the words we like to use**, so we need to start working on active vocabulary early and frequently**.
  • The more we **can write and speak** ~~understand~~, and the more words we **can use actively** ~~have, even passively~~, the more interesting our interaction with the language and the more words we can acquire.
  • If we **can actively use** ~~understand~~ most of the words in a text or conversation, it is easier to pick up the words and phrases we do not yet know **than if we merely understood everything passively**.
  • The **ability to use active** ~~acquisition of passive~~ vocabulary through **output** ~~input~~, is like putting the pieces of the jig-saw together. Gradually the picture **of what we're trying to express** becomes clearer.
  • **Output** ~~Input~~ is easy to arrange. We can **speak** ~~listen~~ and **write** ~~read~~ anywhere and anytime.
See here for some ways that the internet makes output possible from anywhere, which of course includes Steve's own LingQ.
  • **Output** ~~Input~~ is interesting, if we choose content that is meaningful to us.
  • If we develop the habit of **producing output** ~~input learning~~, we become independent.
  • **Being able to produce output** ~~Input learning~~ makes it easy to **practice** ~~review~~ our languages, and maintain them.
  • Through **producing output** ~~input learning~~, especially **on topics we like writing and speaking about** ~~with authentic content~~, we learn not only the language, but many more things.
  • At any time in our **output producing** ~~input learning~~ activities, we can decide to **listen** ~~speak~~ or **read** ~~write~~, to practice what we **can produce** ~~have learned~~.
  • Of course we need to ~~speak~~ **read** a lot in order to speak well**, but**~~.~~ Our progress in speaking will be smoother if we invest time in **output** ~~input~~, and continue doing so.
  • Our interaction with any language, including our own, is mostly as listeners and readers**, so we need to make extra efforts to practice producing output**.
  • If we are good **speakers** ~~listeners~~ and **writers** ~~readers~~, our ~~output~~ **input** skills will have a sound base.
One of my goals in any language learning project is to **have little difficulty in conveying complex ideas to native speakers** ~~read a full length book~~ in that language. Getting there is a powerful moment of achievement, an Everest.

I could go on....


Sunday, April 25, 2010

A single workflow to make use of online language-learning tools

There are so many language-learning resources out there on the web, it's kind of tough to figure out how to make use of them all. In looking at how I'm using these tools myself, I put together the following little process to incorporate many of the language-learning tools I've been using into a single workflow:


Oh, and this workflow is completely free.

Let's walk through this, after the jump.

Start with reading and/or (but preferably and) listening to something in the target language. LingQ is all about content with both text and audio, so that's a good place to start looking, but you're hardly limited to LingQ; any recordings you can find with transcripts, unabridged audio books (including children's books), etc., will do the trick.

To the extent there's anything you don't understand in the text or audio, look it up and add it to your spaced-repetition system. Anki is my current SRS of choice, but some other popular choices are Smart.fm and Mnemosyne.
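Since several of the steps in this workflow funnel new words into an SRS, it may help to see what the scheduler behind the scenes actually does with them. Here's a minimal sketch of the SM-2-style algorithm that Anki and Mnemosyne descend from; the function name and the simplifications are mine, not any tool's actual API:

```python
# A minimal sketch of SM-2-style spaced-repetition scheduling, the family
# of algorithms behind tools like Anki and Mnemosyne. This is an
# illustration, not any tool's real implementation.

def sm2_review(interval_days, ease, repetitions, quality):
    """Return (next_interval_days, new_ease, new_repetitions).

    quality: self-rated recall from 0 (total blackout) to 5 (perfect).
    """
    if quality < 3:
        # Failed recall: the card starts over with a short interval.
        return 1, ease, 0

    # Successful recall: intervals grow, multiplied by the ease factor.
    if repetitions == 0:
        next_interval = 1
    elif repetitions == 1:
        next_interval = 6
    else:
        next_interval = round(interval_days * ease)

    # The ease factor drifts up with good answers and down with shaky
    # ones, floored at 1.3 so intervals never stop growing.
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return next_interval, new_ease, repetitions + 1
```

For example, a card last seen 6 days ago with the default ease of 2.5, answered perfectly for a third time, comes back in 15 days; flub it and it comes back tomorrow. This is why adding a word to your SRS right when you look it up is cheap: the scheduler, not you, decides when you'll see it again.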

Then write something about what you read or listened to in the target language. Try to make use of whatever you needed to look up and add to your SRS, and to the extent that you need to look up anything else, add that to your SRS as well.

Then get that writing corrected. There are a number of ways to do this, but Lang-8 is my standing favorite, and italki recently implemented this feature. Again, if the corrections include things you need to look up, add them to your SRS system.

Once you've got the corrected text, record yourself speaking it and get that recording corrected by native speakers. I use Cinch and Lang-8 to accomplish this.

You've now written and read that writing. Now it's time for some plain old talking. Making use of everything you've learned thus far, record yourself saying something about the running theme and get that corrected in the same way you got the recording of your text corrected. Once again, if the corrections give you anything that needs to go into your SRS, add it.

At this point, you should have everything that needs to go into your SRS. Now go over to RhinoSpike and get native speakers to record the pronunciation of each of those words. Take those audio recordings and add them to your SRS system. From there, you just need to review your newly added items as part of your regular SRS review.

You've also got two things that you've recorded yourself: your corrected text and some plain old talking. Go to RhinoSpike again and get a recording of both from native speakers. Once you've got those recordings, add them to a playlist on iTunes and listen regularly. I'd recommend just throwing all of these recordings into a random-order playlist and listening to them in the background while doing other things. This will provide a review of all of the above.

This entire workflow can be tailored to your level. At the most basic level, you can even use children's books; my kids have plenty of books that come with audio CDs in all three of their languages. But you don't necessarily need to dumb the text down; you can also just keep it short. For example, if you're just starting a language but want to read a news article, you could limit yourself to just the first paragraph. This will likely take a while, but it won't be insurmountable.

If you've got a way to make this workflow better, I'd love to hear it!


Friday, April 23, 2010

This is your brain on languages.



The image you see here is a visualization (which is obviously not comprehensive) of how a given piece of information in a language might get lodged into your brain. The piece of information could be anything: a vocabulary word, a grammar rule, pronunciation, a character, etc.

Every one of those lines emanating from the piece of information connects with one kind of exposure. The more exposures you get, the more connections your brain draws to that piece of information. The more repetitions of a given kind of exposure, the stronger that exposure becomes (imagine the lines getting thicker with each exposure). The stronger and more plentiful your exposures are, the more likely you are to remember the piece of information.

Exposure to a language can be largely divided into reading, listening, writing, and speaking. It doesn't matter if an exposure is via reading/listening (i.e., input from an external source) or writing/speaking (i.e., output to an external target). These traditional ideas of "output" and "input" are both input as far as your brain is concerned.

Output is input.


Wednesday, February 17, 2010

If there weren't so many frickin' naked dudes, ChatRoulette could be a good tool for language learning

ChatRoulette has been generating quite a bit of buzz over the past week or so. The concept is quite simple; you video chat with randomly selected people, and if you don't want to chat with any particular person, you just press F9 to get hooked up with another random person. It's the brainchild of some 17-year-old Russian kid who's now getting courted by U.S. investors.

The idea has great potential for language learning. However, before it can reach that potential, they're going to need to make a few changes.

And priority number one is getting rid of all the naked dudes.

Yes, there seems to be an abundance of dudes exposing themselves in various stages of undress. This article in the Hartford Advocate gives you a good idea of just how many there are. In short, it's definitely NSFW (and not safe for children, for that matter) and it's clearly not for the faint of heart.

As you avoid the pervs, you'll also probably manage to sneak a good laugh or two in. One lady had a note posted in front of her camera that said "Your mother is watching". When I saw that (thinking in particular of all those pervs), I laughed. She removed the note and she was indeed a very maternal looking figure—a very typical "my friend's mom" type. She waved and moved on to the next stranger. Another guy was just sitting there with a big, green alien head on. He kind of cocked his head and stared, then he waved and moved on.

So there's definitely some amusement value to the site, but that's not what gets it a post on this blog. The thing that gets it on here is its potential as a language-learning tool.

I tried it out for maybe 2 or 3 hours in total, and in that time I managed to chat in Chinese, French, German, and Russian (although, as I don't speak Russian, that was limited to my bastardized interpretation of the Russian version of "Pleased to meet you" in Latin letters and responding to the question "Kto vi?", which if I recall correctly means "Who are you?"). I also came across a Dutch guy, so had I any Dutch skills that would be on the list as well.

The award for the most diligent use of the website for language-learning purposes definitely goes to the Chinese. Just about every Chinese person I came across on there was looking to practice English. That of course shows the potential behind the idea; being able to get in touch quickly with a random language partner with no fuss would be a great tool.

There aren't really any other sites that do this. Livemocha, iTalki, LingQ, etc., all require you to contact the specific person you want to speak to; there's no "chat with random English speaker" feature. After using ChatRoulette, the idea definitely fell into the "Why didn't someone already think of this?" bucket.

However, as you've probably already gathered, ChatRoulette isn't anywhere near being a "no fuss" tool, but very little would actually need to be done to make the model effective for language learning. First, you'd of course need to toss out all the pervs, etc. That sort of thing doesn't seem to be an issue at all on Livemocha or other online language learning sites, so it shouldn't be a big deal to do that.

The second thing is that you'd need to be able to filter your chat partners. At a minimum, you should be able to filter by target language so you can get someone who speaks the right language for you, but being able to filter by age, interests, etc., could also be useful for finding someone interesting to talk to.

I don't have much hope that ChatRoulette itself will become a website I can recommend for language learning any time soon, but I very much look forward to language-learning websites implementing their version of the idea soon.

Links:
ChatRoulette
ChatRoulette Gets Fred Wilson’s Attention [GigaOM]
Next! [Hartford Advocate]


Wednesday, October 7, 2009

Get your foreign-language audio recordings corrected online for free

I've already given you the low-down on how to get your foreign-language writings corrected online for free. Now I'd like to turn to how to get your audio recordings corrected for free.

Unfortunately, your options here are still pretty limited. As far as I can tell, there are only two places where you can submit recordings and get them corrected by native speakers, neither of which is close to making the feature ideal: Livemocha and Lang-8.

Lang-8
  • Overview. Lang-8, based in Tokyo, is a two-person project by Yangyang Xi, CEO, and Kazuki Matsumoto, CTO, that focuses on letting language learners get their texts corrected. However, with this little tip, which Lang-8 supports by making adding audio easy, you can get your audio recordings corrected as well.

  • Content. Lang-8 is set up as a journal or a blog, but you're free to post whatever you feel like posting.

  • Making corrections. Correctors can leave comments for you, explaining what you did wrong. There's no feature for them to record a message for you directly. Although they could leave a recording in the comments in the same way it can be posted in the entry's body, no one has done so yet for me.

  • Speed of corrections. Just as with text, the corrections come very rapidly. Waiting a day for corrections would be a long time to wait.

  • Correction presentation. It is up to individual correctors to apply formats: bold, strike-thru, red, and blue text. Your results will vary.

  • Languages. You can post in any language you want, and native speakers of all major languages are well represented on the site. I'd wager that it'd take longer to get corrections for less frequently studied languages, but I've not tested that hypothesis.

  • Interface. Lang-8's interface is alright; it's nothing to rave about, but it gets the job done.

  • Bottom line. I love that I can record whatever I feel like recording to Lang-8, but I don't like that it takes a bunch of steps to post audio recordings and that there's no easy way to post audio recordings in the comments.
Livemocha
  • Overview. Livemocha's main product is its Rosetta Stone-like language-learning courses, but the coolest thing it does is connect you with tons of native speakers, including through corrections of your audio recordings (see my complete review of Livemocha here).

  • Content. For audio recordings, you're supposed to read aloud a text related to your lesson; there's no discretion involved in what you're supposed to record. Learners can and sometimes do add their own audio at the beginning or the end of the recordings, but they generally follow the script. Of course, you don't have to follow the script and you can surely find flexible human users who'll correct your audio recording for you regardless of what it contains.

  • Making corrections. Correctors can easily record their own recordings in reply to your audio recording, which is the major benefit of submitting audio recordings for correction on Livemocha. Correctors also get a comment field in which they can make comments and variously format the comment text.

  • Speed of corrections. Livemocha has a very large user base, so corrections come back very quickly, certainly comparable with Lang-8.

  • Correction presentation. If there's an audio recording attached to a comment, it's readily available for you at the click of a button. Like Lang-8, it is up to individual correctors to format their textual comments. Again, your results will vary.

  • Languages. Arabic, Bulgarian, Chinese, Czech, Dutch, English, Estonian, Farsi, Finnish, French, German, Greek, Hindi, Hungarian, Icelandic, Italian, Japanese, Korean, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Spanish, Turkish, Ukrainian, Urdu.

  • Interface. As far as getting audio recordings corrected goes, I've got no major complaints. The interface allows you to get the job done.

  • Bottom line. While I love that correctors can easily supply their own recordings in response to yours, I don't like that you're nominally limited to Livemocha's specified scripts.
Despite the inability of my correctors to easily supply audio recordings in their comments, I've tended to use Lang-8 more for getting my audio recordings corrected, largely due to its content flexibility. Nevertheless, there is a lot of room for improvement—whether on one of these sites or on the site of a new provider of this feature.


Tuesday, October 6, 2009

Google's getting into the language-learning game

Google Ventures, Google's venture capital arm, has invested an "undisclosed amount" of its $100 million in EnglishCentral, Inc., an English-language learning website where learners can watch popular videos (such as a clip from Forrest Gump or a Red Bull ad) and then get graded on how well they pronounce the words spoken in the videos via EnglishCentral's "unique speech recognition platform".

This investment represents nothing more than Google dipping its toe in the water of the language-learning world. Let them get in up to their ankles or knees, and we'll all think back to the quaint days when we thought Rosetta Stone was a big player in the language-learning world.

Links:
EnglishCentral
Google Ventures, Atlas back language startup EnglishCentral [Mass High Tech]
Google Ventures Invests In English Language Learning Startup EnglishCentral [paidContent.org]
