Re: Limits of grep?
Then xargs won't work either, will it?
> On Tue, Sep 26, 2000, Shachar Shemesh wrote about "Re: Limits of grep?":
> > Short explanation regarding the use of wildcards in Unix commands:
> >..
> > The reason for that is precisely so that you can use this shell feature to
> > perform wildcard expansion.
> >
> > The down side of this is that the limit you have encountered is a shell limit,
> > not a grep limit. The only thing I can suggest to you is to use a program that
> > enumerates the files, and performs grep on each one. Luckily, such a program
> > exists. It is called "find".
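> >
> > A quick sketch of that approach (the directory and the patterns here are
> > just placeholders, not anything from the original question):
> >
> >   # run grep once per file, so no huge argument list is ever built
> >   find . -type f -name '*.txt' -exec grep 'some pattern' {} \;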
>
> Shachar gave a good explanation and suggestions, but I would just like to
> correct one point: the limit you encountered is not actually a limit of the
> shell per se, but rather a limit of the kernel.
>
> As Shachar explained, the shell expands the wildcard into a huge list, and
> then constructs (using dynamic memory, which is NOT limited) the command line.
> The shell then calls the system call execve (see 'man execve') to run this
> command line. This kernel call has a limit - I don't know what it is in Linux,
> but from a little experiment I guess it is 128 kilobytes. If the command line
> is over this limit, the system call fails with an E2BIG error, and the command
> is not run. The shell sees this error and reports it to the user.
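>
> (If you want to check the actual limit on your own system, and assuming you
> have the POSIX getconf utility, one way is:
>
>   getconf ARG_MAX    # maximum size, in bytes, of the argument list to exec
>
> On x86 Linux kernels of this era ARG_MAX is 131072 bytes, i.e. exactly 128K,
> which matches the guess above.)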
>
> For example, and to see that this has nothing to do with wildcard expansion,
> I created a file /tmp/aaa containing a single 150K-long line full of file
> names, then ran 'ls' with that line as its arguments, and got this error:
>
> bash$ ls `cat /tmp/aaa`
> sh: /bin/ls: Argument list too long
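>
> (The exact contents of /tmp/aaa are not important; assuming GNU seq, one way
> to build a similar file out of made-up names is something like:
>
>   # one long line of space-separated dummy names, well over 128K in total
>   seq -f 'file%g' 1 20000 | tr '\n' ' ' > /tmp/aaa
>
> and the ls example above then fails in the same way.)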
>
> In fact, *any* command, not just ls, will fail in the exact same way. But
> to see that this is *not* a shell problem, try running
>
> echo `cat /tmp/aaa`
>
> And this works! (be careful - it will scroll a lot of garbage :)). Why?
> Because (at least on bash and zsh) echo is a shell builtin, and no external
> program needs to be run, hence the huge argument list does not need to be
> passed to the kernel. The shell itself has no problems dealing with this huge
> argument list because, as I already mentioned, it uses dynamic memory.
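>
> (You can see the difference by forcing the external binary instead of the
> builtin:
>
>   type echo                   # bash answers: "echo is a shell builtin"
>   /bin/echo `cat /tmp/aaa`    # fails with "Argument list too long"
>
> because running /bin/echo does require an execve call.)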
>
> By the way, 128K (if this is indeed the limit on Linux) is actually a big
> limit. If I remember correctly, old versions of Unix only allowed 4096 bytes
> on the command line, so xargs and the like were very useful commands.
> On Solaris, the limit is even bigger: 1048320 bytes (see ARG_MAX in limits.h).
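>
> (For completeness, a sketch of the find/xargs combination mentioned above,
> with placeholder patterns:
>
>   # xargs batches the names into command lines that fit under ARG_MAX
>   find . -type f -name '*.txt' | xargs grep -l 'some pattern'
>
> With GNU find and xargs, use -print0 and -0 instead if file names may
> contain whitespace.)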
>
>
> --
> Nadav Har'El | Tuesday, Sep 26 2000, 26 Elul 5760
> nyh@math.technion.ac.il |-----------------------------------------
> Phone: +972-53-245868, ICQ 13349191 |Long periods of drought are always
> http://nadav.harel.org.il |followed by rain.
>
--
Shaul Karl <shaulka@bezeqint.net>
Donate free food to the world's hungry: see http://www.thehungersite.com
=================================================================
To unsubscribe, send mail to linux-il-request@linux.org.il with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail linux-il-request@linux.org.il