First: it doesn't recognize gzip-compressed Wikipedia pages. Maybe that's because I'm using a proxy, or maybe it's just not implemented at all? It would be nice to have this fixed.
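For reference, a minimal sketch of what handling gzip-encoded responses could look like, using only the Python standard library (this is not the program's actual code, just an illustration of the idea):

```python
import gzip
import urllib.request

def maybe_gunzip(data, content_encoding):
    """Decode the body only if the server says it is gzip-compressed."""
    if content_encoding and "gzip" in content_encoding.lower():
        return gzip.decompress(data)
    return data

def fetch(url):
    """Fetch a URL, advertising gzip support and decoding the reply."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        return maybe_gunzip(resp.read(), resp.headers.get("Content-Encoding"))
```

The point is just that checking the Content-Encoding header before parsing would make compressed pages work whether or not a proxy is in the way.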
Second, there should at least be an option to save images and documents (not just HTML pages) that are linked directly from a page. For example, when I save a Wikipedia article, all of its pictures are ignored. To get them I have to set the link depth to 1, but every article links to the same article in many other languages, and the HTML of a single page is around 100-300 KB, so 20 languages I don't need at all * ~200 KB = about 4 MB of useless data.
There is no way to filter this out, since all the language versions live on near-identical domains: en.wikipedia, fr.wikipedia, etc.
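What I mean by "directly linked" could be as simple as collecting <img> sources and links whose targets look like media files, without following any HTML links. A hypothetical sketch (the extension list and class name are my own illustration, not anything from the program):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Hypothetical set of file types worth grabbing at depth 0.
MEDIA_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".svg", ".pdf", ".zip")

class MediaLinks(HTMLParser):
    """Collect <img src> and direct links to media files from one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base = base_url
        self.media = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("src"):
            self.media.append(urljoin(self.base, attrs["src"]))
        elif tag == "a" and attrs.get("href", "").lower().endswith(MEDIA_EXTS):
            self.media.append(urljoin(self.base, attrs["href"]))
```

Downloading just this list would get the pictures without pulling in 20 translated copies of the article.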
Also, when I try to avoid saving reference links that point outside the wiki to other sites on the same topic, I check the "don't go to different domains" option. But all the pictures are hosted on wikimedia, which is a different domain from wikipedia, and since there is no way to add "trusted domains", checking that box automatically leaves me without pictures.
So:
1) There should be an option to add extra domains whose content gets saved when it is linked from the original page.
2) There should be an option (or algorithm) to save only pages in selected languages, so I can pick English and the program doesn't waste space on French, German, etc.
3) There should be an option to save directly linked images, documents, and even archived documents.
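Points 1 and 2 together amount to a URL policy. A sketch of one, under my own assumptions (the domain list and language set are examples, and the subdomain-based language check only works for sites structured like Wikipedia):

```python
from urllib.parse import urlparse

# Hypothetical user-configured policy, not real program options.
TRUSTED_DOMAINS = {"en.wikipedia.org", "upload.wikimedia.org"}
KEEP_LANGUAGES = {"en"}

def should_save(url):
    """Allow trusted hosts, plus wikipedia subdomains in chosen languages."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_DOMAINS:
        return True
    parts = host.split(".")
    # <lang>.wikipedia.org is allowed only for the selected languages.
    if len(parts) == 3 and parts[1] == "wikipedia" and parts[2] == "org":
        return parts[0] in KEEP_LANGUAGES
    return False
```

With something like this, "don't go to different domains" would stop being all-or-nothing: wikimedia images pass, the 20 unwanted language mirrors don't.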
It would also be nice to have an option to ignore links inside specific named tables, since links are mostly organized in tables: you could then "disable" the table with links to the other-language versions of the page, or the table of links to other sites, etc.
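Such an option could be a parser that simply stops collecting links while inside a table whose class (or id) is on a user blocklist. A sketch with the standard library; the class names "navbox" and "interlanguage" are made up for illustration:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect hrefs, skipping any link inside a blocklisted table."""
    BLOCKED = {"navbox", "interlanguage"}  # hypothetical user setting

    def __init__(self):
        super().__init__()
        self.links = []
        self.blocked_depth = 0  # how many nested blocked tables we are inside

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "table":
            classes = set((attrs.get("class") or "").split())
            if self.blocked_depth or classes & self.BLOCKED:
                self.blocked_depth += 1
        elif tag == "a" and not self.blocked_depth:
            href = attrs.get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "table" and self.blocked_depth:
            self.blocked_depth -= 1
```

That single rule would drop the whole block of interlanguage links in one stroke instead of filtering them URL by URL.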