Re: URLfs ?
The way I see this implemented is with a loopback device mounted under a
directory. A URL would have that directory prepended so it can be
recognized. This causes the kernel (which would have to be hacked) to pass
the tail of the path, which contains the URL, to open() in libc. The
actual open is handled by block-read etc. inside the loopback driver,
which would redirect the request to an execve of one of the utilities
mentioned, chosen according to the protocol requested (ftp, http, man,
whatever), with a guard timeout etc. The utility would run under some
unprivileged UID and actually get the stuff from the net (or not). The
driver would act as a wrapper and manage a local cache of retrieved
objects (kept as files on disk).
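The dispatch-and-cache part of this could be sketched in userspace like
so. The utility names, cache layout and key scheme here are illustrative
assumptions, and the real network fetch is stubbed out (marked
"placeholder") so the sketch runs on its own:

```shell
#!/bin/sh
# Hypothetical sketch: pick an existing fetch utility by protocol prefix
# and keep retrieved objects as files in a local cache directory.

CACHE=$(mktemp -d)            # cache of retrieved objects, kept on disk

fetch_url() {
    url="$1"
    proto="${url%%:*}"                              # ftp, http, man, ...
    key=$(printf '%s' "$url" | tr -c 'A-Za-z0-9' '_')
    out="$CACHE/$key"

    if [ -f "$out" ]; then
        echo "HIT"                                  # already in the cache
        return 0
    fi
    case "$proto" in
        http) cmd="lynx -source" ;;                 # one possible retriever
        ftp)  cmd="ftpget" ;;
        man)  cmd="man ${url#man:}" ;;
        *)    echo "unknown protocol: $proto" >&2; return 1 ;;
    esac
    # a real driver would run something like: timeout ... $cmd "$url" > "$out"
    : > "$out"                                      # placeholder fetch
    echo "MISS ($cmd)"
}

fetch_url "http://example.org/index.html"   # MISS (lynx -source)
fetch_url "http://example.org/index.html"   # HIT
```

The guard timeout and the switch to an unprivileged UID would wrap the
stubbed fetch line; everything else is bookkeeping the driver can do.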
This sounds simple and would probably work great, especially since the
bulk of the work (retrieving) is left up to individual pieces of code
which already exist (more or less). Writing a Dinozilla when other
Dinozillas already exist to do the job is contrary to the Unix principles.
The only small problem is: how many of you have actually written a
working device driver under Linux, let alone one that has concurrency and
load constraints and is probably spiced with deadlocks everywhere (moving
data into the kernel - loop device - and out again to and from the script
is a deadlock nightmare).
Security is not a real concern as long as the device refuses to let
itself be mounted in dangerous places and refuses to import files marked
executable and such (i.e. it won't let you run a retrieved URL straight
out of the cache). What exactly you do with what you retrieve is your own
risk, however.
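That "won't import executables" policy amounts to forcing a safe mode on
everything written into the cache. A minimal sketch, with illustrative
paths and a fake "retrieved" object standing in for a real download:

```shell
#!/bin/sh
# Sketch: every object stored in the cache gets mode 0644, so it can
# never be executed straight out of the cache even if the remote side
# marked it executable.

CACHE=$(mktemp -d)

store_object() {
    src="$1"; name="$2"
    # install -m 0644: copy readable but with all execute bits cleared
    install -m 0644 "$src" "$CACHE/$name"
}

tmp=$(mktemp)
printf '#!/bin/sh\necho oops\n' > "$tmp"
chmod 0755 "$tmp"                 # the "retrieved" object arrives executable
store_object "$tmp" "retrieved_url"
[ -x "$CACHE/retrieved_url" ] && echo "executable" || echo "not executable"
```

Running it prints "not executable": the copy in the cache has lost its
execute bits regardless of what the source had.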
BTW, what is the security level of that Marimba/Castanet thing? If I
understand it correctly, you can wind up with a completely new OS
overnight if you let it rip and the other side 'helps' you a little bit.
On Thu, 31 Jul 1997, Erez Doron wrote:
> Ira Abramov wrote:
>
> > On Thu, 31 Jul 1997, Erez Doron wrote:
> >
> > > here is a mail i sent to the one who maintains the ext2fs in the
> > kernel
> > > ( i didn't found sombody better so i mailed him )
> > >
> > > look at it, and i'd like to hear your comments
> >
> > well, URLfs is a cute idea, but I'd expect it to come from Microsoft
> > before Linux. I suspect that with all the cute stuff it can do, it's
> > also a very high security hazard, and I'd never install that on my
> > machine.
>
> security hazard, why?
>
> > Also, handshaking with remote sites is sometimes slow or problematic;
> > over half of lynx's code is dedicated to this, I'd suspect (I give
> > lynx as an example because it's a pure retrieval engine with almost
> > no interface, at least compared to graphic browsers), and that means
> > a chunk of 250k at the least added to the kernel, with no guarantee
> > of stability. It will be pretty embarrassing to have your kernel
> > freeze because of a PPP line disconnection or something... nope. Such
> > high-level protocols as FTP and HTTP must not be handled at Ring 0.
>
> well, the kernel has to wait for the cd/floppy to spin up and it does
> not freeze then, so why should the kernel freeze in this case?
>
> there is an opportunity to use a daemon or loadable module for it;
> this will keep the kernel small if URLs are not used, and will load the
> code once (nowadays if you run 5 lynx and 3 ftp together, you get the
> code 5 times, instead of once as you would if the URL block were in
> the kernel).
>
> making a urlfs will make every application support URLs (like mc, xfm,
> kfm and any other file manager or utility) without rewriting or even
> recompiling the application.
>
> (the urlfs idea came to me after I saw the KDE environment, whose file
> manager KFM addresses local files, http, ftp and even tar.gz and man
> pages in the same way (try opening the file man:ls and you get it
> troffed))
>
> >
> >
> > OTOH, you could have the shell support it; that's not such a big
> > hack as a kernel module. For all I know you may even be able to
> > create such a macro for an existing shell (zsh?), or take the
> > sources of another and tell it to fire up a tiny external requester
> > (ftpget, webcopy and others already exist) each time it finds
> > something that looks like a URL while parsing...
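That quoted shell-level approach can be sketched as a wrapper function
that scans its arguments, fetches anything URL-shaped into a temp file,
and hands the local copy to the real command. The retriever call is
stubbed here (marked "placeholder"); lynx, ftpget or webcopy would go in
its place:

```shell
#!/bin/sh
# Hypothetical sketch of the shell wrapper: URL arguments are replaced
# by local temp-file copies before the real command runs.

urlwrap() {
    cmd="$1"; shift
    args=""
    for a in "$@"; do
        case "$a" in
            http://*|ftp://*)
                f=$(mktemp)
                # real version: lynx -source "$a" > "$f"
                printf 'fetched: %s\n' "$a" > "$f"   # placeholder fetch
                args="$args $f" ;;
            *)
                args="$args $a" ;;
        esac
    done
    $cmd $args   # word-splitting is safe here: mktemp paths have no spaces
}

urlwrap cat http://example.org/motd   # prints: fetched: http://example.org/motd
```

A real shell integration would hook this into the parser instead of
requiring an explicit wrapper, but the fetch-then-substitute step is the
same.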
>
> > (note: if anyone has programming frenzy burning in his bones and
> > nothing to do right now, I have a REALLY cool idea I have no idea
> > how to implement...
> >
> > something many linuxers will be thankful for for years to come :-)
>
>
>
> --
>
> Regards
> Erez.
> ___ ___
> L_|_ _|_J
> ( -O> <O- )
> ___//\J +------------------------------------------+ L/\\___
> //-,\ | Erez Doron, | /,-\\
> || / \\___L U.S. Robotics Technologies, Israel J___// \ ||
> _ ''/\/ '---J Email: L---' \/\'' _
> / \ //\\. | erez@scorpio.com | .//\\ / \
> |_/\'/ || +------------------------------------------+ || \'/\_|
>
> ' ||_ _|| '
> |__) (__|
>
>
Peter Lorand Peres
------------------
plp@actcom.co.il 100310.2360 on CIS (please use Internet address for mail)
http://ourworld.compuserve.com/homepages/plp