view TODO @ 180:09ca6eb5c7ff noffle
[svn] * TODO,src/client.c,src/client.h,src/fetch.c,src/fetch.h,src/noffle.c:
Improve error checking during fetches. A fetch is now aborted immediately
if the connection times out or if an unexpected response arrives.
This should fix problems with articles appearing in the wrong group,
and possibly other mysterious happenings.
author   : bears
date     : Wed, 09 May 2001 12:33:43 +0100
parents  : 7ba337dafb2c
children : fed1334d766b
-------------------------------------------------------------------------------
NOFFLE Todolist
-------------------------------------------------------------------------------

Urgent
------

* Does Client_connect leak resources if it fails?

* Make debug logging an option in the config file instead of a compile-time
  option. This makes it easier for users helping with bug hunts to switch on
  debug logging temporarily.

Later
-----

* Improve performance of the group database. GDBM is a poor choice; a btree
  from the Berkeley DB library in libc would be better. This would also be a
  good time to redesign the group.h interface with respect to process
  concurrency, in case the simple global locking strategy is changed in the
  future.

* Add a "hostname" config option for setting the FQHN used in generated
  message IDs.

* Add a read timeout when running as a server, and automatically close the
  connection if the client sends no data for an extended period.

* Implement a simple filter using popen or FIFOs.

* Make NOFFLE compatible with the latest NNTP draft.

* Improve speed of online mode: keep the connection to the server open for a
  while.

* Check all suggestions in
  http://mars.superlink.net/user/tal/writings/news-software-authors.html
  (Use a NOV library? Use inews for validating posted articles? ...)

* Store requested articles by group + number. This would allow creating
  pseudo-groups (like <groupname>.requested) that contain only fully
  downloaded articles in overview mode (a very nice and clever idea sent in
  by a user; it would make overview mode much easier to use). A second
  advantage: NOFFLE would work with servers that have disabled retrieving
  articles by message ID.

* Expire should clean up empty request/outgoing directories, so they do not
  exist forever after a server change.

* Do not log program abortion due to SIGINT if no inconsistency can occur
  (e.g. when 'noffle -d' writes to a pipe and the next program terminates,
  or when ^C is pressed).

* Improve the WWW page and documentation.
* Keeping the content list across several lock/unlock cycles could lead to
  inconsistent results, because the content list may be modified by pseudo
  articles. Check this!

* Optimize NEWGROUPS (extra list?)

* Add a noffle query option that checks whether all groups are still
  available at the remote server(s), and deletes them otherwise.

* In online mode, retrieve the full article header from the remote server if
  the client sends a HEAD command. Presently, only the header lines from the
  overview are returned, and the article is only retrieved on an ARTICLE or
  BODY command. The reason for this was that some readers (like pine)
  retrieve the group overview by sending lots of HEAD commands, and their
  performance would suffer badly. On the other hand, some readers (like
  slrn) cache the header from a HEAD command even if a following ARTICLE
  command gets more header lines, so that not all header lines are available
  when reading news in online mode before the next start of the reader. But
  some header lines (e.g. Reply-To) are important. Maybe make the behaviour
  configurable.

User-Wishlist
-------------

* Group requesting: I'd like noffle to maintain a whitelist of users who can
  request new subscriptions: for instance, if user mardy wants noffle to
  fetch headers of it.comp.os.linux, he could just post a message to
  noffle.control with something like this in the body:

      subscribe-over: it.comp.os.linux

Some day far away
-----------------

* Understand the Supersedes header (useful for reading the news.answers
  group).

* Get and execute cancel messages (read control.cancel, but use XPAT to get
  only cancels for groups in the fetch list). Seems expensive (20,000
  headers a day for the remote server to search through).