48 points | by routerl | 1 month ago
That's not been my experience at all: As long as the content I'm reading is at the right level, I've been able to learn to segment as my vocabulary has grown, and there's always only a few new words that I haven't learned how to recognize in-context yet. Having a good built-in dictionary wherever you're reading (e.g., a Chrome plugin, or Pleco, or whatever) has been helpful here.
My fear would be that the longer you put off learning to segment in your head, the harder it will be.
My advice for this would be that you present the text as you'd normally see it (e.g., no segmentation), but add aids to help learners see or understand the segmentation. At the very least you could have the dictionary pop-up operate at the level of the full segment, rather than individual characters; and you could consider having it so that as you mouse over a character, it draws a little line under / border around the characters in the same segment. That could give your brain the little hint it needs to "see" the words segmented "in situ".
A lot of the pain I see in foreigners learning Chinese is that they try to tackle the written language too early. Actually, I think that's sub-optimal in any language, but it's even more expensive in a language like Chinese or Thai where word segmentation isn't a part of the writing system. I totally get it since characters are cool and curiosity about them draws many learners to the language, but it's a lot easier to take on one challenge at a time!
I’ve done a fair amount of Chinese language segmentation programming - and yeah it’s not easy, especially as you reach for higher levels of accuracy.
You need to put in significant effort for gains of just a few percentage points in accuracy.
For my own tools which focus on speed (and used for finding frequently used words in large bodies of text) I ended up opting for a first longest match algorithm.
It has a relatively high error rate, but it’s acceptable if you’re only looking for the first few hundred frequently used words.
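The first-longest-match approach described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual tool, and the tiny dictionary is made up for the example:

```python
# A toy first-longest-match (greedy) segmenter.
# Scan left to right; at each position take the longest dictionary match.

DICTIONARY = {"我们", "喜欢", "学习", "中文"}
MAX_WORD_LEN = max(len(w) for w in DICTIONARY)

def segment(text: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, shrinking until something matches;
        # a single character always "matches" as a last resort.
        for length in range(min(MAX_WORD_LEN, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in DICTIONARY:
                tokens.append(candidate)
                i += length
                break
    return tokens

print(segment("我们喜欢学习中文"))  # → ['我们', '喜欢', '学习', '中文']
```

The error rate comes from greediness: whenever a long dictionary entry overlaps the true word boundary, the algorithm commits to the wrong split and never backtracks.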
What segmenter are you using, or have you developed your own?
I'm using Jieba[0] because it hits a nice balance of fast and accurate. But I'm initializing it with a custom dictionary (~800k entries), and have added several layers of heuristic post-segmentation. For example, Jieba tends to split up chengyu into two words, but I've decided they should be displayed as a single word, since chengyu are typically a single entry in dictionaries.
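A post-segmentation pass like the chengyu fix could look something like the sketch below. The function name and the sample chengyu set are hypothetical; the real app's heuristics are presumably more involved:

```python
# Hypothetical post-processing pass: re-merge adjacent tokens whose
# concatenation is a known chengyu, so 四字成语 display as one word.

CHENGYU = {"画蛇添足", "井底之蛙"}  # illustrative entries only

def merge_chengyu(tokens: list[str]) -> list[str]:
    merged = []
    i = 0
    while i < len(tokens):
        # If two adjacent tokens join into a known chengyu, emit them as one.
        if i + 1 < len(tokens) and tokens[i] + tokens[i + 1] in CHENGYU:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

print(merge_chengyu(["画蛇", "添足", "的", "故事"]))  # → ['画蛇添足', '的', '故事']
```

Running this after the segmenter keeps the segmenter itself untouched, which makes each heuristic layer easy to test and toggle independently.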
I tried to build something similar, but what I never figured out (and think is crucial) is the right front-end. Yes, word segmentation is useful, but if I have to click on each word to see its meaning, then for how I learn Chinese (by reading texts) I still find the Zhongwen Chrome extension more useful, since I see the English meaning faster, just by hovering the cursor over the word.
In my project, I was trying to display the English translation under each Chinese word, which I think would require AI to determine the correct translation, since one cannot just put the CC-CEDICT entry there.
P.S.: I don't know how you built your dictionary, but it translated 气功师 as "Aerosolist", which I am not sure is even a thing. This should actually be two words, not one; the correct segmentation and translation is 气功 师, "qigong master".
The (awful and incorrect) translation you've pointed out comes from the segmenter being too greedy, not finding the (non-existent) word in any dictionary, and therefore dispatching the word to be machine translated, without context. This is the final fallback in the segmentation pipeline, to avoid displaying nothing at all, and my priority right now is making the segmentation pipeline more robust so this rarely (or never) happens, since it sometimes produces hilariously bad results!
Also the 'clip reader' feature in Pleco is decent.
Also, supporting simplified as well as traditional might increase your potential audience.
It's already possible to switch instantly between pinyin and bopomofo, and I'm working on letting users switch between simplified/traditional, but this is also a non-trivial problem. For now, the app will follow the user's lead: if you enter traditional text, it will return traditional text, and same goes for simplified.
https://all-day-breakfast.com/chinese/
What is kind of interesting is that the script itself (a single Perl CGI script) has survived the passage of time better than the text documenting it.
Besides all the broken links, the text refers throughout to Big-5 encoding, and the form at https://all-day-breakfast.com/chinese/big5-simple.html has a warning that the popups only work in Netscape or MSIE 4. You can now ignore all of that because browsers are more encoding aware (it still uses Big-5 internally but you can paste in Unicode) and the popups work anywhere.
What that link doesn't give you is the dictionary files I used as input for the preprocessing step - which of course were also 1998 vintage. There are copies on the server (https://all-day-breakfast.com/chinese/cedict.b5_saved, https://all-day-breakfast.com/chinese/big5-PY.tit)
My Chinese got somewhat better, then a lot worse, then a little bit better again - obviously mostly to do with whether I was actually using it, which on the whole I haven't been. But back then I was really working on it and I just wanted something to help - there were a few useful resources I knew of (CEDICT obviously, and Rick Harbaugh's zhongwen.com was mindblowing at the time) and this seemed like a way to glue them together that I actually knew how to do.
Writing learning tools is obviously not the same thing as learning though.
That's not true at all; you can go a long way just by clicking on characters in Pleco, and Pleco's segmentation algorithm is awful. (Specifically, it's greedy "find the longest substring starting at the selected character for which a dictionary entry exists".)
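The lookup described (longest dictionary entry starting at the tapped character) is simple enough to sketch. This is an illustration of that algorithm as the commenter characterizes it, not Pleco's actual code, and the dictionary is made up:

```python
# Greedy click-to-lookup: from a selected position, return the longest
# substring starting there that has a dictionary entry.

DICTIONARY = {"中", "中文", "文", "很", "有", "意思", "有意思"}

def lookup_at(text: str, pos: int) -> str:
    for end in range(len(text), pos, -1):  # longest candidate first
        if text[pos:end] in DICTIONARY:
            return text[pos:end]
    return text[pos]  # fall back to the bare character

print(lookup_at("中文很有意思", 3))  # → '有意思'
```

Note this only looks rightward from the selection, which is exactly why it can misfire: it never considers that the tapped character might be the tail of a word that started earlier.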
Sometimes I go back through very old conversations in Chinese and notice that I completely misunderstood something. That's an unfortunate but normal part of the language-learning process. You don't need full comprehension to learn; how else would babies manage?
I'm also working on showing all the pronunciations/definitions for a given hanzi; it should be ready later this week.
That app has been invaluable to me as someone learning Chinese.
That app breaks down Mandarin sentences into individual characters. I believe it's made by a Taiwanese developer too.
I tried your app with a few sentences and it works really well!
Anytime I need to look up a character or word, I go to https://www.moedict.tw/ first. Pleco is still great for having so many add-ons (including dictionaries) and, from what I've heard, some decent graded readers.
I use Pleco almost every day :)