Learning with Texts allows you to copy and paste in a body of text, note which words within that text you do and don't know, quickly look up the words you don't, and create flashcards from them. It was designed with languages that use spaces in mind (the spaces tell it where one word stops and the next begins), and works quite well with them. For languages that don't use spaces, like Japanese, it's much more cumbersome (some elbow grease is needed to mark where one word stops and the next begins), but it's still a useful tool. The interface is not very intuitive and there's definitely a significant learning curve to climb before you get your sea legs, but I'd recommend breaking out your climbing gear because the price is right ($0) and there's no other free tool that does the same thing.
In fact, the only other place I'm aware of where you can do the same thing (head to the comments if you know of another!) is LingQ. However, LingQ only allows you to input 100 terms for free; beyond that, you have to subscribe. While I've found LingQ to be a bit more user friendly and intuitive, it's hard to beat free.
My initial approach to reviewing Learning with Texts was to simply pick some article I was reading, throw it up there, run through the process with it, and then report back in the form of a review. However, the initial article I selected was in Japanese, and it quickly became apparent that the Learning with Texts experience is going to be vastly different depending on whether you're using a language with spaces, like all major Western languages, or a language without spaces, like Japanese. As such, I also decided I'd add the text of a short comment from my blog that was written in Portuguese to test out how it works with languages that use spaces.
Adding a text. To get a text into the system: (1) sign in; (2) click on "My Texts" from the list; (3) click on "New Text…" above the table; (4) select the language, enter a title, copy and paste the text, and then click "Save and Open". Once that's done, you can start marking the words you do and don't know.
Identifying vocab in languages without spaces. Soon after Benny first announced the implementation of Learning with Texts on his site, I hopped over and added a Japanese text to the system, only to find the text was broken down character by character instead of word by word. My thought was, "Oh, damn, doesn't work with Japanese," and I set it aside for a while.
Upon deciding to give it a second go, it took some poking around in the forums to discover that, while Japanese does work, it doesn't exactly work well. Basically, the system is not designed for languages that don't have spaces, such as Japanese or Chinese. Ordinarily, the system relies on spaces to tell it where one word ends and another begins; take away the spaces and it's suddenly much harder to figure out what constitutes a word. To do that automatically, you need a language-specific parser for each such language, and those are not part of the Learning with Texts package. (The forums seem to indicate that you can use tools like MeCab or KAKASI to parse Japanese text for use with Learning with Texts, and if anyone could point me to an explanation of how to do that for Japanese or Chinese, it would be much appreciated.)
Learning with Texts won't find the words for you in languages that don't use spaces, but that doesn't mean you're without options. There are basically two ways to go about it. You can either manually put spaces between words before importing the text, or you can manually combine single characters into multi-character words as needed. Either one of these is cumbersome and is going to be difficult for beginners (as it's not always readily apparent where one word ends and the next begins).
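To give a sense of what that parsing involves, here's a minimal sketch of the first approach (pre-inserting spaces before import) using dictionary-based longest-match segmentation, which is roughly what segmenters do at their simplest. The tiny word list here is a hypothetical stand-in; real tools like MeCab use full lexicons and statistical models to do this far more accurately.

```python
# Sketch: greedy longest-match segmentation for a spaceless text.
# WORDS is a hypothetical stand-in for a real dictionary; tools like
# MeCab do this properly with a full lexicon and statistics.

WORDS = {"日本語", "勉強", "して", "います", "を"}
MAX_LEN = max(len(w) for w in WORDS)

def segment(text):
    """Insert spaces between words using greedy longest-match."""
    out = []
    i = 0
    while i < len(text):
        # Try the longest possible chunk first, shrinking until a
        # dictionary word matches (or we fall back to a single character).
        for length in range(min(MAX_LEN, len(text) - i), 0, -1):
            chunk = text[i:i + length]
            if chunk in WORDS or length == 1:
                out.append(chunk)
                i += length
                break
    return " ".join(out)

print(segment("日本語を勉強しています"))  # → 日本語 を 勉強 して います
```

The output is exactly what Learning with Texts wants as input: the same text with spaces between words, which you'd paste in instead of the original.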
Once you set the length of the word, it will appear on the right side of the screen as a new term. For each of those words that you already know, you need to select "WKn", or well-known, from the right side of the screen and click save; otherwise, it goes back to being a string of single-character terms.
Contrast this with languages that use spaces (see below), where every word you don't create a term for is automatically deemed known when you press the "I know all" button at the end, and you can see how much more cumbersome this is.
That all said, the more you use this in Japanese (or any language that doesn't use spaces), the fewer new terms you will need to add. Thus, with repeated use, the burden of adding multi-character terms will steadily decline.
There were several other issues I noticed when using Japanese:
- Words in a Roman script are ignored, and there didn't seem to be a way to get these recognized as terms. While this generally will avoid English words simply being used in Japanese, it will also ignore homemade Japanese terms that use Roman letters, such as OL ("office lady").
- Numbers are also ignored, even when they form part of a word; e.g., １つ hitotsu can only be made into a term as つ tsu. As with Roman letters, there doesn't seem to be a way to include these in a term in Japanese.
- Similarly, the character 々 is ignored. This character indicates a repetition of the previous Chinese character, and thus forms an integral part of the word in question. In a two-character term using 々, you either need to limit the term to the previous character or include something after it, neither of which will be exactly right.
- There is an issue when a new term starts with an existing term of two or more characters. The text I used first contained 上手 jouzu and then 上手い umai. In order to enter umai as a term after already having entered jouzu as a term, I first needed to delete jouzu and then re-add it after entering umai. While this is a pain in a single text, it becomes pretty unworkable if the term you need to delete is in another text, as you'll need to track that term down. (This problem does not occur if the existing term consists of only one character or if the existing term is somewhere other than at the beginning of the term, which makes the behavior seem like a bug rather than an intentional feature.)
- While you can control the size of the text in the body of text itself, you can't control text size in the dictionary search field, which led to some more-complex characters being difficult to read. I remedied this by using my browser's zoom feature, but this led to me later having trouble finding the "I know all" button because it's contained in a frame of static size and will get pushed down below the visible fold when zoomed in. I was stumped as to where that button was until I remembered to zoom out.
Here's the form for adding a new language:
Ouch. If that form isn't crying out for some user-friendliness TLC, I don't know what is.
Rather than wading into the muck to try to calibrate a dictionary to look things up properly, etc., I simply typed "Portuguese" in the language field and pressed save, which resulted in Google Translate (or "GTr") becoming the default dictionary.
Identifying vocab in languages with spaces. Once I got through the language-adding process, things were a breeze. Google translate worked well enough (although I only had one term that I needed to look up), and Learning with Texts easily added a bunch of terms to my "known terms" list, without any of the hassle of Japanese. So for languages that use spaces between words, marking known vocab with Learning with Texts is a cakewalk.
Inflexibility in adding terms. One annoying thing that seems to apply to any language is that you can only create terms exactly as they appear in the text; if a Spanish text contains only the conjugated hablo ("I speak") and you try to edit that into the unconjugated hablar ("to speak") as a term, you'll get an error message (which makes me wonder why editing is permitted at all). For instance, when I tried to change the Portuguese plural esboços ("outlines") to its singular form esboço ("outline"), I got this somewhat-unclear error:
The same of course applies to Japanese volitional forms, declined nouns in German, etc. This weakens Learning with Texts' usefulness for creating flashcards; if you already know the grammatical changes but have just come across a new word, you're still forced to create a term from whatever form happens to appear in your text rather than a standard dictionary form that might be more useful. More flexibility here would be great.
Adding vocab to a spaced-repetition system. Learning with Texts has a built in flashcard system, but I'd much rather incorporate the vocab into a full-fledged spaced-repetition system like Anki. And that's possible, although it requires a long, not-so-user-friendly set of steps to make it happen.
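As a rough illustration of the end goal, here's a sketch of turning exported terms into a tab-separated text file that Anki's importer accepts. The three-column layout (term, translation, example sentence) is an assumption for illustration; check the columns of your actual Learning with Texts export before adapting it.

```python
# Sketch: convert Learning with Texts term data into a file Anki can
# import. The (term, translation, sentence) row layout is hypothetical;
# inspect your real export to see which columns it actually contains.

import csv

def lwt_to_anki(lwt_rows):
    """Turn (term, translation, sentence) rows into Anki front/back
    pairs, putting the example sentence on the back for context."""
    cards = []
    for term, translation, sentence in lwt_rows:
        back = f"{translation}<br>{sentence}" if sentence else translation
        cards.append((term, back))
    return cards

# Example rows as they might come out of an export.
rows = [("esboço", "outline", "Escrevi um esboço do artigo."),
        ("falar", "to speak", "")]

# Anki imports plain tab-separated text files directly.
with open("anki_import.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(lwt_to_anki(rows))
```

From there, Anki's File → Import dialog will map the two fields onto the front and back of a basic note type (enable "Allow HTML in fields" so the `<br>` renders as a line break).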
User friendliness. My initial impression, and one that proved true as I continued to use it, was that Learning with Texts is very open source-y, in that it's full of features but the design isn't intuitive. That means there's a learning curve as you figure out what obscure abbreviations like "Expr", "WKn", "Ign", and "St" mean (although the mouse-over tooltips help), and, as noted above, things like adding a new language and exporting to Anki are far from user friendly. I also found a lot of stuff on various pages whose use wasn't clear, so I simply ignored it.
There do seem to be explanations for all this stuff if you look hard enough—some are straightforward and provided by Benny, but for others you'll need to go spelunking into the forums. It's nice to have explanations, of course, but it's even nicer not to need them.
Growing pains. Learning with Texts as hosted on Fluent in 3 Months also seems to be experiencing some growing pains. It seemed to inexplicably load very slowly a number of times, and one time it got so slow for me that I thought my internet connection had gone out, but other websites were loading fine. That time, I walked away from my computer for a while and came back to see a 404 error, although reloading the page at that point brought it up right away. Later, I found Learning with Texts completely inaccessible, but Benny was already on the case, apparently dusting off some of his programming skills. I'm sure that these are the kinds of things that will be ironed out over time, but they do make the system more of a pain to use at the moment.
Although there's plenty of room for improvement, especially with respect to languages without spaces, this is still a great, free tool for picking out the vocab you need to focus on from texts you read and then getting that vocab into your spaced repetition system. Like Lang-8, RhinoSpike, Anki, and others, this fits perfectly into my language-learning workflow and looks primed to become one of my regular language-learning tools, and I'd recommend climbing the learning curve and starting to use this tool right away, even as I look forward to that curve getting flattened.