diff --git a/docs/404.md b/docs/404.md index 690f4152..0b4c7741 100644 --- a/docs/404.md +++ b/docs/404.md @@ -5,7 +5,7 @@ permalink: /404.html # Oops, page not found (404 error) You probably followed a broken link to, sorry. -We can't really know what you were looking for, but you can try looking +We cannot really know what you were looking for, but you can try looking for it in the [full list of the site's pages](/SitePages). You can also try a full text search on the Squid project's web sites diff --git a/docs/ConfigExamples/Authenticate/Bypass.md b/docs/ConfigExamples/Authenticate/Bypass.md index c711513c..fd6bb343 100644 --- a/docs/ConfigExamples/Authenticate/Bypass.md +++ b/docs/ConfigExamples/Authenticate/Bypass.md @@ -143,7 +143,7 @@ This can be accomplished by using 6 configuration files: This example configuration will allow any user access to whitelisted sites without asking for identification, users in group A will be able to access sites in list A, users in group B will be able to access sites -from group B and noone will be able to access anything else. +from group B and no one will be able to access anything else. ## Advanced configuration diff --git a/docs/ConfigExamples/Authenticate/Kerberos.md b/docs/ConfigExamples/Authenticate/Kerberos.md index e250a27d..e1eed4b1 100644 --- a/docs/ConfigExamples/Authenticate/Kerberos.md +++ b/docs/ConfigExamples/Authenticate/Kerberos.md @@ -208,7 +208,7 @@ If squid_kerb_ldap is used the following steps are happening 1. Squid "login" to Windows Active Directory or Unix kdc as user \@DOMAIN.COM\>. This requires Active Directory to have an attribute userPrincipalname set to - \@DOMAIN.COM\> for the associated acount. This + \@DOMAIN.COM\> for the associated account. This is usaully done by using msktutil. ![Squid-4.jpeg](/assets/images/squid-4.jpg) diff --git a/docs/ConfigExamples/Authenticate/LoggingOnly.md b/docs/ConfigExamples/Authenticate/LoggingOnly.md index 60bafd85..cd4ba51f 100644 --- a/docs/ConfigExamples/Authenticate/LoggingOnly.md +++ b/docs/ConfigExamples/Authenticate/LoggingOnly.md @@ -21,7 +21,7 @@ hack needs to be used: Remember that http_access order is very important. If you allow access -without the "dummyAuth" acl, you won't get usernames logged +without the "dummyAuth" acl, you will not get usernames logged One of the following authentication helpers is also needed to ensure that login details are available for use when that demand is made. diff --git a/docs/ConfigExamples/Authenticate/Ntlm.md b/docs/ConfigExamples/Authenticate/Ntlm.md index 2a52c1f4..733a4c15 100644 --- a/docs/ConfigExamples/Authenticate/Ntlm.md +++ b/docs/ConfigExamples/Authenticate/Ntlm.md @@ -9,7 +9,7 @@ Winbind is a Samba component providing access to Windows Active Directory authentication services on a Unix-like operating system ## Supported Samba Releases -Samba 3 and later provide a squid-compatible authenitcation helper named +Samba 3 and later provide a squid-compatible authentication helper named `ntlm_auth` ## Samba Configuration @@ -93,7 +93,7 @@ gpasswd -a proxy winbindd_priv As Samba-3.x has it's own authentication helper there is no need to build any of the Squid authentication helpers for use with Samba-3.x -(and the helpers provided by Squid won't work if you do). You do however +(and the helpers provided by Squid will not work if you do). You do however need to enable support for the NTLM scheme if you plan on using this. 
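As a concrete sketch of the wiring this page describes (the helper path and child count below are illustrative assumptions, not values from the original text), the Samba `ntlm_auth` helper is enabled in squid.conf roughly like this:

```
# Sketch only: the ntlm_auth path varies by distribution
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
acl AuthenticatedUsers proxy_auth REQUIRED
http_access allow AuthenticatedUsers
```

The `--helper-protocol=squid-2.5-ntlmssp` switch selects the Squid-side helper protocol that `ntlm_auth` speaks for the NTLM scheme.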
Also you may want to use the wbinfo_group helper for group lookups
diff --git a/docs/ConfigExamples/Authenticate/NtlmCentOS5.md b/docs/ConfigExamples/Authenticate/NtlmCentOS5.md
index 01575894..0e0ce361 100644
--- a/docs/ConfigExamples/Authenticate/NtlmCentOS5.md
+++ b/docs/ConfigExamples/Authenticate/NtlmCentOS5.md
@@ -81,7 +81,7 @@ configure Samba, Winbind and perform the join in one step.
     Shutting down Winbind services: [FAILED]
     Starting Winbind services: [  OK  ]
 
-If Winbind wasn't running before this it can't shutdown, but authconfig
+If Winbind wasn't running before this it cannot shut down, but authconfig
 will start it and enable it to start at boot.
 
 The default permissions for **/var/cache/samba/winbindd_privileged** in
diff --git a/docs/ConfigExamples/Authenticate/WindowsActiveDirectory.md b/docs/ConfigExamples/Authenticate/WindowsActiveDirectory.md
index 57e34a79..9130bfae 100644
--- a/docs/ConfigExamples/Authenticate/WindowsActiveDirectory.md
+++ b/docs/ConfigExamples/Authenticate/WindowsActiveDirectory.md
@@ -78,7 +78,7 @@ authentication may fail.
 
 ## NTP Configuration
 
-Time needs to be syncronised with Windows Domain Controllers for
+Time needs to be synchronised with Windows Domain Controllers for
 authentication, configure the proxy to obtain time from them and test
 to ensure they are working as expected.
 
@@ -165,7 +165,7 @@ use it to create our kerberos computer object in Active directory.
 
     kinit administrator
 
-It should return without errors. You can see if you succesfully obtained
+It should return without errors. You can see if you successfully obtained
 a ticket with:
 
     klist
 
@@ -227,7 +227,7 @@ users will not be able to authenticate with Squid.
 Add the following to cron so it can automatically updates the computer
 account in active directory when it expires (typically 30 days). Pipe it
 through logger so I can see any errors in syslog if necessary. As stated
-msktutil uses the default `/etc/krb5.conf` file for its paramaters so be
+msktutil uses the default `/etc/krb5.conf` file for its parameters so be
 aware of that if you decide to make any changes in it.
 
     00 4 * * * msktutil --auto-update --verbose --computer-name squidproxy-k | logger -t msktutil
 
@@ -263,7 +263,7 @@ Now join the proxy to the domain.
 ```
 net ads join -U Administrator
 ```
-Start samba and winbind and test acces to the domain.
+Start samba and winbind and test access to the domain.
 ```
 wbinfo -t
 ```
@@ -324,7 +324,7 @@ chgrp proxy /etc/squid3/ldappass.txt
 ## Install negotiate_wrapper
 
 Firstly we need to install negotiate_wrapper. Install the necessary
-build tools on Debian intall `build-essential linux-headers-$(uname -r)`
+build tools; on Debian install `build-essential linux-headers-$(uname -r)`
 
 Then compile and install.
 
 ```bash
diff --git a/docs/ConfigExamples/Caching/WindowsUpdates.md b/docs/ConfigExamples/Caching/WindowsUpdates.md
index b9dbcbdd..d336ffad 100644
--- a/docs/ConfigExamples/Caching/WindowsUpdates.md
+++ b/docs/ConfigExamples/Caching/WindowsUpdates.md
@@ -43,7 +43,7 @@ requests. Particularly when large objects are involved.
   Default value is a bit small. It needs to be somewhere 100MB or
   higher to cope with the IE updates.
 - **[range_offset_limit](http://www.squid-cache.org/Doc/config/range_offset_limit)**.
-  Does the main work of converting range requests into cacheable
+  Does the main work of converting range requests into cachable
   requests.
Use the same size limit as [maximum_object_size](http://www.squid-cache.org/Doc/config/maximum_object_size) to prevent conversion of requests for objects which will not cache @@ -131,7 +131,7 @@ stored in the squid cache. I also recommend a 30 to 60GB [cache_dir](http://www.squid-cache.org/Doc/config/cache_dir) size allocation, which will let you download tonnes of windows updates and -other stuff and then you won't really have any major issues with cache +other stuff and then you will not really have any major issues with cache storage or cache allocation or any other issues to do with the cache. ## Why does it go so slowly through Squid? diff --git a/docs/ConfigExamples/Chat/Signal.md b/docs/ConfigExamples/Chat/Signal.md index c85bbef1..0b45404e 100644 --- a/docs/ConfigExamples/Chat/Signal.md +++ b/docs/ConfigExamples/Chat/Signal.md @@ -40,7 +40,7 @@ connections. > :x: Note that port 80 is still too unsafe to allow generic CONNECT to - happen on it. However, Signal client often can't do initial connect + happen on it. However, Signal client often cannot do initial connect without permission CONNECT to port 80 at textsecure-service-ca.whispersystems.org. You are warned. diff --git a/docs/ConfigExamples/Chat/Skype.md b/docs/ConfigExamples/Chat/Skype.md index 7ee32fe1..5d4f1bc7 100644 --- a/docs/ConfigExamples/Chat/Skype.md +++ b/docs/ConfigExamples/Chat/Skype.md @@ -15,7 +15,7 @@ then the mentioned in the article to make it so skype clients will be able to run smooth with squid in the picture. Else then that skype in many cases will require direct access to the Internet and will not work in a very restricted networks with allow access only using a proxy. I -belive that NTOP have some more details on how to somehow make skype +believe that NTOP have some more details on how to somehow make skype work or be blocked in some cases. I recommend peeking at theri at: diff --git a/docs/ConfigExamples/ClusteringTproxySquid.md b/docs/ConfigExamples/ClusteringTproxySquid.md index fd10e238..18a03a7b 100644 --- a/docs/ConfigExamples/ClusteringTproxySquid.md +++ b/docs/ConfigExamples/ClusteringTproxySquid.md @@ -14,7 +14,7 @@ What is good about WCCP? WCCP allows web cache clustering with built in fail-over mechanism and semi auto configuration management. It gives the Network administrator quiet in mind that if something in -the cache cluster is not functioning the clients wont suffer from it. +the cache cluster is not functioning the clients will not suffer from it. WCCP can be implemented for http and other protocols. many Network administrator will implement the Web cache infrastructure close to the diff --git a/docs/ConfigExamples/ContentAdaptation/C-ICAP.md b/docs/ConfigExamples/ContentAdaptation/C-ICAP.md index 84b50711..63454f1c 100644 --- a/docs/ConfigExamples/ContentAdaptation/C-ICAP.md +++ b/docs/ConfigExamples/ContentAdaptation/C-ICAP.md @@ -163,7 +163,7 @@ Then adjust squidclamav.conf as follows: logredir 1 # Enable / disable DNS lookup of client ip address. Default is enabled '1' to - # preserve backward compatibility but you must desactivate this feature if you + # preserve backward compatibility but you must deactivate this feature if you # don't use trustclient with hostname in the regexp or if you don't have a DNS # on your network. Disabling it will also speed up squidclamav. 
dnslookup 0
@@ -175,7 +175,7 @@ Then adjust squidclamav.conf as follows:
     safebrowsing 0
 
     #
-    # Here is some defaut regex pattern to have a high speed proxy on system
+    # Here is some default regex pattern to have a high speed proxy on system
     # with low resources.
     #
     # Abort AV scan, but not chained program
@@ -468,7 +468,7 @@ Adjust srv_url_check.conf as follows:
 
 > :information_source:
     Note: Using whitelist is good idea for performance reasons. It is
-    plain text file with 2nd level domain names. All hostnames beyong
+    plain text file with 2nd level domain names. All hostnames beyond
     this domains will be pass. Also setup DNS cache is also great idea
     to improve performance.
 
@@ -671,7 +671,7 @@ Here is also Munin plugins for C-ICAP monitoring (performance-related
 
 > :information_source:
     When upgrading c-icap server, you also need (in most cases) to
-    rebuild squidclamav to aviod possible API incompatibility.
+    rebuild squidclamav to avoid possible API incompatibility.
 
 > :information_source:
     In case of c-icap permanently restarts, increase DebugLevel in
diff --git a/docs/ConfigExamples/ContentAdaptation/EcapForExifStripping.md b/docs/ConfigExamples/ContentAdaptation/EcapForExifStripping.md
index 65e78e91..0a88bdf2 100644
--- a/docs/ConfigExamples/ContentAdaptation/EcapForExifStripping.md
+++ b/docs/ConfigExamples/ContentAdaptation/EcapForExifStripping.md
@@ -104,7 +104,7 @@ First, build and install dependencies:
 
     make -j8
     make install
 
-Make shure all shared libraries are installed.
+Make sure all shared libraries are installed.
 
 > :information_source: Note:
    Use correct compiler full path, depending your setup. Commands
@@ -145,7 +145,7 @@ Supported configuration parameters:
         Files with size greater than limit will be stored in temporary disk
        storage, otherwise processing will be done in RAM.
     exclude_types
-        List of semicolon seprated MIME types which shouldn't be
+        List of semicolon separated MIME types which shouldn't be
        handled by adapter.
 
 ## Squid Configuration File
diff --git a/docs/ConfigExamples/DynamicContent/Coordinator.md b/docs/ConfigExamples/DynamicContent/Coordinator.md
index 2bd3d975..c811713b 100644
--- a/docs/ConfigExamples/DynamicContent/Coordinator.md
+++ b/docs/ConfigExamples/DynamicContent/Coordinator.md
@@ -43,7 +43,7 @@ some of the reasons for that:
 - The result of a live content feed based or not on argument supplied
   by end user.
 - a CMS(Content Management System) scripts design.
-- bad programing.
+- bad programming.
 - Privacy policies.
 
 ## File De-Duplication/Duplication
@@ -51,7 +51,7 @@ some of the reasons for that:
 - two urls that result the same identical resource ( many to one ).
   Some of the reasons for that:
   - a temporary URL for content access based on credentials
-  - bad programing or fear from caching
+  - bad programming or fear of caching
   - Privacy policies
 
 There is also the problem of content copying around the web. For
@@ -89,7 +89,7 @@ just a longer url.
 many CMS like Wordpress use question mark to identify a specific
 page/article stored in the system. ("/wordpress/?p=941")
 
-but insted exploting this convention the script authur can just add
+but instead of exploiting this convention the script author can just add
 Cache specific headers to allow or disallow caching the resource.
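To make that convention concrete, here is a sketch (not from the original page) of the long-standing squid.conf treatment of query-string URLs; a script that wants its "?p=941" pages cached must override this with explicit cache headers:

```
# Old-style default: treat query-string URLs as uncachable
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

# Modern default: let the origin's cache headers decide,
# but never serve such URLs stale
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
```

Under the modern default the script's own Cache-Control/Expires headers decide cachability; the older QUERY rule simply refused to cache such URLs at all.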
## HTTP and caching diff --git a/docs/ConfigExamples/DynamicContent/YouTube.md b/docs/ConfigExamples/DynamicContent/YouTube.md index 791e1cb8..1e65a714 100644 --- a/docs/ConfigExamples/DynamicContent/YouTube.md +++ b/docs/ConfigExamples/DynamicContent/YouTube.md @@ -181,7 +181,7 @@ per.php: //file not in cache? Get it, send it & save it logdata("MISS",$url,$fname); $fileptr=fopen($fname,"w"); - //no validity check, simply don't write the file if we can't open it. prevents noticeable failure/ + //no validity check, simply don't write the file if we cannot open it. prevents noticeable failure/ while(!feof($urlptr)){ $line=fread($urlptr,$blocksize); diff --git a/docs/ConfigExamples/FullyTransparentWithTPROXY.md b/docs/ConfigExamples/FullyTransparentWithTPROXY.md index 3536aefc..259f3d40 100644 --- a/docs/ConfigExamples/FullyTransparentWithTPROXY.md +++ b/docs/ConfigExamples/FullyTransparentWithTPROXY.md @@ -31,7 +31,7 @@ the tproxy include file needs to be placed in /usr/include/linux/netfilter_ipv4/ip_tproxy.h or include/netfilter_ipv4/ip_tproxy.h in the squid src tree). -TThe iptables rule needs to use the TPROXY target (instead of the +The iptables rule needs to use the TPROXY target (instead of the REDIRECT target) to redirect the port 80 traffic to the proxy. Ie: iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j TPROXY --on-port 80 @@ -175,7 +175,7 @@ balabit for kernel & iptables tproxy * check-up access.log --\> yes it is increments log check-up my pc by * opening whatismyipaddress.com --\> yes it is my pc's ip -Now, I will try tuning-up my box & squid.conf tommorow +Now, I will try tuning-up my box & squid.conf tomorrow ## Another Example diff --git a/docs/ConfigExamples/Intercept.md b/docs/ConfigExamples/Intercept.md index 6650214c..ed49873e 100644 --- a/docs/ConfigExamples/Intercept.md +++ b/docs/ConfigExamples/Intercept.md @@ -8,7 +8,7 @@ using any two devices the configurations have been separated into endpoint configurations. L2 forwarding is best suited for when the proxy is directly connected to -the router, i.e. presists in the same L2-segment of LAN. Since Layer-2 +the router, i.e. exists in the same L2-segment of LAN. Since Layer-2 is a level below TCP/IP it can be treated as equivalent to *Policy Routing* at the IP layer (the difference is PBR is executes on CPU, against true L2 WCCP forwarding, which often executes on control plane diff --git a/docs/ConfigExamples/Intercept/CiscoIOSv15Wccp2.md b/docs/ConfigExamples/Intercept/CiscoIOSv15Wccp2.md index fd5f9638..25990d3d 100644 --- a/docs/ConfigExamples/Intercept/CiscoIOSv15Wccp2.md +++ b/docs/ConfigExamples/Intercept/CiscoIOSv15Wccp2.md @@ -26,13 +26,13 @@ Router has both router/switch functionality, so we can use both GRE/L2 redirection methods. > :information_source: - Note: Beware - you must have NAT configuted on your squid's box, and + Note: Beware - you must have NAT configured on your squid's box, and you must have squid built with OS-specific NAT support. > :information_source: Note: When using managed switch in DMZ, be sure proxy box port in the same VLAN/has the same encapsulation as router port with WCCP - activated. Otherwise router can't do WCCP handshake with proxy. + activated. Otherwise router cannot do WCCP handshake with proxy. ### Cisco IOS 15.5(3)M2 router @@ -109,7 +109,7 @@ and passthrough default route to next hop (or last resort gateway). 
#### Security -To avoid denial-of-service attacks, you can enforce authentification +To avoid denial-of-service attacks, you can enforce authentication between proxy(proxies) and router. To do that you need to setup WCCP services on router using passwords: @@ -158,7 +158,7 @@ interception. > :information_source: Note: **Performance** is more better against PBR (route-map), WCCP - uses less CPU on Cisco's devices. So, WCCP is preferrable against + uses less CPU on Cisco's devices. So, WCCP is preferable against route-map. Also note, l2 redirection has hardware support and less overhead, than gre, which has only software processing (on CPU). diff --git a/docs/ConfigExamples/Intercept/DebianWithRedirectorAndReporting.md b/docs/ConfigExamples/Intercept/DebianWithRedirectorAndReporting.md index 415c1bdd..80b14a7c 100644 --- a/docs/ConfigExamples/Intercept/DebianWithRedirectorAndReporting.md +++ b/docs/ConfigExamples/Intercept/DebianWithRedirectorAndReporting.md @@ -142,7 +142,7 @@ After editing the configuration file, start squid Once the Squid has started, you should be able to browse the web from the LAN. Note that it is the Squid that provides HTTP connection to the -outside. If the Squid process crashes or is stopped, LAN clients won't +outside. If the Squid process crashes or is stopped, LAN clients will not be able to browse the web. To see in realtime the requests served by Squid, use the command @@ -223,7 +223,7 @@ get only safe content. (Note that Google is [gradually switching to HTTPS for all searches](http://support.google.com/websearch/bin/answer.py?hl=en&answer=173733). -As Squid only handles HTTP traffic, this won't work anymore. However, +As Squid only handles HTTP traffic, this will not work anymore. However, you get the idea.) [Download the latest version of Squirm diff --git a/docs/ConfigExamples/Intercept/IptablesPolicyRoute.md b/docs/ConfigExamples/Intercept/IptablesPolicyRoute.md index 921f3232..4f1c3bd4 100644 --- a/docs/ConfigExamples/Intercept/IptablesPolicyRoute.md +++ b/docs/ConfigExamples/Intercept/IptablesPolicyRoute.md @@ -14,7 +14,7 @@ traffic (web in this instance) towards a Squid proxy. Various networks are using embedded Linux devices (such as OpenWRT) as gateways and wish to implement transparent caching or proxying. -There's no obvious policy routing in Linux - you use iptables to mark +There is no obvious policy routing in Linux - you use iptables to mark interesting traffic, iproute2 ip rules to choose an alternate routing table and a default route in the alternate routing table to policy route to the distribution. diff --git a/docs/ConfigExamples/Intercept/LinuxBridge.md b/docs/ConfigExamples/Intercept/LinuxBridge.md index 700987c5..851886d4 100644 --- a/docs/ConfigExamples/Intercept/LinuxBridge.md +++ b/docs/ConfigExamples/Intercept/LinuxBridge.md @@ -21,7 +21,7 @@ implement transparent caching or content filtering. ## ebtables DROP vs iptables DROP In iptables which in most cases is being used to filter network traffic -the DROP target means "packet disapear". +the DROP target means "packet disappear". In ebtables a "-j redirect --redirect-target DROP" means "packet be gone from the bridge into the upper layers of the kernel such as diff --git a/docs/ConfigExamples/Intercept/SslBumpExplicit.md b/docs/ConfigExamples/Intercept/SslBumpExplicit.md index 6df71794..04bb516b 100644 --- a/docs/ConfigExamples/Intercept/SslBumpExplicit.md +++ b/docs/ConfigExamples/Intercept/SslBumpExplicit.md @@ -110,7 +110,7 @@ For example, in FireFox: 2. 
Go to the 'Advanced' section, 'Encryption' tab
 3. Press the 'View Certificates' button and go to the 'Authorities' tab
 4. Press the 'Import' button, select the .der file that was created
-    previously and pres 'OK'
+    previously and press 'OK'
 
 In theory, you must either import your root certificate into browsers
 or instruct users on how to do that. Unfortunately, it is apparently a
@@ -169,7 +169,7 @@ library default "Global Trusted CA" set. This is done by
    not included (see below). Adding extra root CA in this way is your
    responsibility. Also beware, when you use OpenSSL, you need to make
    c_rehash utility before Squid can use the added certificates.
-    Beware - you can't grab any CA's you see. Check it before use\!
+    Beware - you cannot grab any CA's you see. Check it before use\!
 
 ### Missing intermediate certificates
 
diff --git a/docs/ConfigExamples/MultiplePortsWithWccp2.md b/docs/ConfigExamples/MultiplePortsWithWccp2.md
index 8e35d930..5768fbcd 100644
--- a/docs/ConfigExamples/MultiplePortsWithWccp2.md
+++ b/docs/ConfigExamples/MultiplePortsWithWccp2.md
@@ -8,15 +8,15 @@ categories: [ConfigExample]
 ## Outline
 
 The Squid WCCPv2 implementation can intercept more than TCP port 80. The
-currrent implementation can create multiple arbitrary TCP and UDP ports.
+current implementation can create multiple arbitrary TCP and UDP ports.
 
 There are a few caveats:
 
 - Squid will have to be configured to listen on each port - the
   [wccp2_service](http://www.squid-cache.org/Doc/config/wccp2_service)
   configuration only tells WCCPv2 what to do, not Squid;
-- WCCPv2 (as far as I know) can't be told to redirect random dynamic
-  TCP sessions, only "fixed" service ports - so it can't intercept and
+- WCCPv2 (as far as I know) cannot be told to redirect random dynamic
+  TCP sessions, only "fixed" service ports - so it cannot intercept and
   cache the FTP data streams;
- You could use Squid to advertise services which are handled by
   "other" software running on the server (for example, if you had a
diff --git a/docs/ConfigExamples/Reverse/ExchangeRpc.md b/docs/ConfigExamples/Reverse/ExchangeRpc.md
index dec7c400..06432e25 100644
--- a/docs/ConfigExamples/Reverse/ExchangeRpc.md
+++ b/docs/ConfigExamples/Reverse/ExchangeRpc.md
@@ -8,7 +8,7 @@ categories: [ConfigExample]
 Squid can be used as an accelerator and ACL filter in front of an
 exchange server exporting mail via RPC over HTTP. The
 RPC_IN_DATA and RPC_OUT_DATA methods communicate with
-_https://URL/rpc/rpcproxy.dll_, for if there's need to limit the
+_https://URL/rpc/rpcproxy.dll_, in case there is a need to limit the
 access..
 
 ## Setup
diff --git a/docs/ConfigExamples/SquidAndWccp2.md b/docs/ConfigExamples/SquidAndWccp2.md
index 7fd4b7f3..a0b7145e 100644
--- a/docs/ConfigExamples/SquidAndWccp2.md
+++ b/docs/ConfigExamples/SquidAndWccp2.md
@@ -198,7 +198,7 @@ loosely sorted so that rules with more hits are higher up:
     -A INPUT -s ! 10.15.128.0/255.255.192.0 -p tcp -m tcp --sport 8080 -j ACCEPT
     # TCP DNS replies. Just in case
     -A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-    # SSH conection from admin server
+    # SSH connection from admin server
    -A INPUT -s 10.15.138.45 -p tcp -m tcp --dport 22 -j ACCEPT
     # Reject other SSH connections (optional)
     -A INPUT -s ! 10.15.128.0/255.255.192.0 -p tcp -m tcp --dport 22 -j REJECT --reject-with icmp-port-unreachable
@@ -210,10 +210,10 @@ loosely sorted so that rules with more hits are higher up:
     # Accept some traceroute.
3 per second
     -A INPUT -p udp -m udp --dport 33434:33445 -m limit --limit 3/sec --limit-burst 3 -j ACCEPT
     # Log everything else, maybe add explicit rules to block certain traffic.
-    # Unnecesary but useful monitoring
+    # Unnecessary but useful monitoring
     -A INPUT -j LOG
     # Accept forwarded requests.
-    # Totally unnecesary, but allows for basic monitoring.
+    # Totally unnecessary, but allows for basic monitoring.
     -A FORWARD -s 10.15.128.0/255.255.192.0 -d ! 10.15.128.0/255.255.192.0 -p tcp -m tcp --dport 80 -j ACCEPT
     -A FORWARD -s 10.15.128.0/255.255.192.0 -d ! 10.15.128.0/255.255.192.0 -p tcp -m tcp --dport 3128 -j ACCEPT
     -A FORWARD -s 10.15.128.0/255.255.192.0 -d ! 10.15.128.0/255.255.192.0 -p tcp -m tcp --dport 8000 -j ACCEPT
diff --git a/docs/ConfigExamples/Strange/BlockingTLD.md b/docs/ConfigExamples/Strange/BlockingTLD.md
index e107759c..3ef70c72 100644
--- a/docs/ConfigExamples/Strange/BlockingTLD.md
+++ b/docs/ConfigExamples/Strange/BlockingTLD.md
@@ -23,6 +23,6 @@ Paste the configuration file like this:
     http_access deny block_tld
     deny_info TCP_RESET block_tld
 
-Pay your attention, that we send TCP_RESET to client. So, he can't see
+Note that we send a TCP_RESET to the client. So, he cannot see
 we do it with our proxy.
 
 :smirk:
diff --git a/docs/ConfigExamples/Strange/TorifiedSquid.md b/docs/ConfigExamples/Strange/TorifiedSquid.md
index cd284df7..5bee8064 100644
--- a/docs/ConfigExamples/Strange/TorifiedSquid.md
+++ b/docs/ConfigExamples/Strange/TorifiedSquid.md
@@ -182,7 +182,7 @@ For squid 4.x+, adjust access_log settings as follows:
 
 > :information_source:
    Note: Currently you must **splice** Tor tunneled connections,
-    because of Squid can't re-crypt peer connections yet. It is
+    because Squid cannot re-crypt peer connections yet. It is
    recommended to use this configuration in bump-enabled setups.
 
 > :information_source:
@@ -195,5 +195,5 @@ For squid 4.x+, adjust access_log settings as follows:
 
 Tor-tunneled HTTP connections has better performance, because of
 caching. However, HTTPS connections still limited by Tor performance,
-because of splice required and they can't be caching in this
+because splicing is required and they cannot be cached in this
 configuration in any form. Note this.
diff --git a/docs/ConfigExamples/TorrentFiltering.md b/docs/ConfigExamples/TorrentFiltering.md
index 826a5d60..16343f1f 100644
--- a/docs/ConfigExamples/TorrentFiltering.md
+++ b/docs/ConfigExamples/TorrentFiltering.md
@@ -7,7 +7,7 @@ categories: [ConfigExample]
 
 ## Outline
 
-Torrent filtering is a diffucult problem. which can't be solved easily.
+Torrent filtering is a difficult problem which cannot be solved easily.
 To difficult this for users you can first deny download .torrent files.
 
 ## Usage
 
@@ -15,7 +15,7 @@ To difficult this for users you can first deny download .torrent files.
 You can also enforce this task uses [NBAR protocol
 discovery](http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_nbar/configuration/xe-3s/qos-nbar-xe-3s-book/nbar-protocl-discvry.html)
 (DPI functionality) in your router (ISR G-2 and above 29xx Cisco series
-or similar). Only Squid can't completely block torrents your wish.
+or similar). Squid alone cannot completely block torrents as you wish.
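One possible shape for the .torrent denial mentioned in the Outline (a sketch with illustrative ACL names; the page's own configuration section follows):

```
# Sketch: refuse .torrent metadata by URL suffix and by reply MIME type
acl torrent_files urlpath_regex -i \.torrent$
http_access deny torrent_files
acl torrent_mime rep_mime_type -i application/x-bittorrent
http_reply_access deny torrent_mime
```

This only raises the bar for users; the actual peer-to-peer transfers still need the router-level NBAR/DPI blocking described above.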
## Squid Configuration File
diff --git a/docs/ConfigExamples/UbuntuTproxy4Wccp2.md b/docs/ConfigExamples/UbuntuTproxy4Wccp2.md
index f298c435..94c2286e 100644
--- a/docs/ConfigExamples/UbuntuTproxy4Wccp2.md
+++ b/docs/ConfigExamples/UbuntuTproxy4Wccp2.md
@@ -10,10 +10,10 @@ by *Eliezer Croitoru*
 WCCP stands for ["Web Cache Communication
 Protocol"](http://en.wikipedia.org/wiki/Web_Cache_Communication_Protocol)
 What is good about WCCP? WCCP allows separation of duties between the
-network and the application and there for Auto redundency.
+network and the application and therefore automatic redundancy.
 
 the router has couple junctions that it can intercept on routing level
-dynamicly packets. on every interface/vlan there is a "IN" and "OUT".
+packets dynamically. On every interface/vlan there is an "IN" and an "OUT".
 IN stands for incoming packets and OUT stands for OUTGOING packets. the
 WCCP daemon on the cisco router gets information about the Cache
 supplier and service. then on the cisco router we can define ACLs to
@@ -21,7 +21,7 @@ apply the service on besides the Cache settings supplied by the cache.
 
 the Cache supplier can interact in two ways with cisco devices: GRE
 tunnel and Layer 2 SWITCHING forwarding. when used with a GRE tunnel all
-the traffic that comes and goes to the client are transfered to the
+the traffic that comes and goes to the client is transferred to the
 proxy on the GRE tunnel instead the cisco router forwards packets to
 "hijack" encapsulated in the gre
 
@@ -39,7 +39,7 @@ loop. so instead of applying regulare WCCP ACLs we are applying another
 ACL built in WCCP and this is the EXLUDE. the EXCLUDE applies only on
 Interface (or vlan interface) so we need to
-separte the traffic of the clients and the proxy. in our case we use
+separate the traffic of the clients and the proxy. In our case we use
 another interface. on the router we use interface f1/0 for clients,
 f1/0 for the proxy and f0/0 to the internet.
 
@@ -69,7 +69,7 @@ you do know basic Networking and cisco cli basics.
 
 you do know what a GRE tunnel is.
 
-## Toplogy
+## Topology
 
 ![wccp2_vlan.png](/assets/images/wccp2-vlan.png)
diff --git a/docs/ConfigExamples/WebwasherChained.md b/docs/ConfigExamples/WebwasherChained.md
index 9dde1b39..276f3c07 100644
--- a/docs/ConfigExamples/WebwasherChained.md
+++ b/docs/ConfigExamples/WebwasherChained.md
@@ -179,7 +179,7 @@ types of queries directly from the Squid to the web server.
 ## Webwasher configuration
 
 Since the configuration options in the web interface have moved between
-version 5.x and 6.x I won't describe the exact path. If you don't know
+version 5.x and 6.x I will not describe the exact path. If you don't know
 where to find a certain option just use the search box on the top right.
 
 First of all define your *profiles*. You will probably already have an
diff --git a/docs/CookiePolicy.md b/docs/CookiePolicy.md
index 434a2ca0..52df92c3 100644
--- a/docs/CookiePolicy.md
+++ b/docs/CookiePolicy.md
@@ -8,7 +8,7 @@ way to connect this information to the user's identity or to track
 users' behaviors. This cookie is randomly created the first time the
 user visits the website and is only used for technical purposes. Users
 are free to use their browsers' technical features and not to accept
-this cookie; apart from a slight degradation in useability of the site,
+this cookie; apart from a slight degradation in usability of the site,
 there will be no adverse effects for non-registered users.
This website might be hosted on [Github Pages](https://pages.github.com/)
diff --git a/docs/DeveloperResources/ClientStreams.md b/docs/DeveloperResources/ClientStreams.md
index 9a695006..5ce416fa 100644
--- a/docs/DeveloperResources/ClientStreams.md
+++ b/docs/DeveloperResources/ClientStreams.md
@@ -11,13 +11,13 @@ an IRC chat about ClientStreams, it needs to be cleaned up and made
 more organised...
 
 ```irc
-14:48 < nicholas> Hi. I'm working on bug 1160 (analyze HTML to prefetch embedded objects). I can't figure out why, but even though it
+14:48 < nicholas> Hi. I'm working on bug 1160 (analyze HTML to prefetch embedded objects). I cannot figure out why, but even though it
                   fetches the pages, it doesn't cache the result! The fetch is initiated with "fwdState(-1, entry, request);".
 14:49 < lifeless> I'd use the same mechanism ESI does.
 14:49 < nicholas> Ok, that's client streams.
 14:49 < lifeless> the fwdState api is on the wrong side of the store
 14:49 < nicholas> doh!
-14:49 < lifeless> so it doesn't have any of the required logic - cachability, vary handling, updates of existing opbjects...
+14:49 < lifeless> so it doesn't have any of the required logic - cachability, vary handling, updates of existing objects...
 14:50 < lifeless> things like store digests just haven't been updated to use client streams yet.
 14:50 < nicholas> What, concisely, is a store digest?
 14:51 < lifeless> a bitmap that lossilly represents the contents of an entire squid cache, biased to hits.
@@ -45,11 +45,11 @@ organised...
 15:00 < nicholas> For a ClientStreamData, I'm supposed to create my own Data class which is derived from, er, Refcountable? Then let the
                   ClientStreamData's internal pointer point to my object, then upcast it when my callbacks are called?
 15:01 < nicholas> See, I don't really understand what my callbacks are really supposed to do, since I only want "default" behaviour. As
-                  in, whatever squid normally does to cache/handle a request, expect that there's no sender to send it to.
+                  in, whatever squid normally does to cache/handle a request, except that there is no sender to send it to.
 15:02 < lifeless> well you don't want that.
 15:02 < lifeless> because you don't want to parse requests.
 15:02 < lifeless> ClientSocketContext is likely to be the closest thing to what you want though.
-15:03 < lifeless> so your readfunc needs to eat all the data it recieves.
+15:03 < lifeless> so your readfunc needs to eat all the data it receives.
 15:04 < lifeless> you can throw it away.
 15:04 < lifeless> your detach function can just call clientStreamDetach(node, http);
 15:04 < nicholas> so do I add my function into ClientSocketContext's read function?
@@ -61,7 +61,7 @@ organised...
 15:05 < lifeless> right, you should have that already written though - whatever is doing the parsing should already be a clientStream
 15:06 < nicholas> Nope. I just hacked it into http.cc.
 15:06 < lifeless> if its not, then don't worry for now, get it working is the first step.
-15:06 < nicholas> Not that I can't move it pretty easily.
+15:06 < nicholas> Not that I cannot move it pretty easily.
 15:06 < nicholas> Everything works, except that it doesn't cache what it fetches. And now I know why.
 15:06 < lifeless> your Status calls should always return prev()->status()
 15:07 < lifeless> the callback call is the one that is given the data, it too should throw it away.
@@ -85,10 +85,10 @@ organised...
 15:13 < nicholas> stream.getRaw() is a pointer to the node, yes? I could the code around that confusing.
15:14 < lifeless> stream is a ESIStreamContext which is a clientStream node that pulls data from a clientstream, instances of which are used by both the master esi document and includes -15:14 < lifeless> (different instances, but hte logic is shared by composition) +15:14 < lifeless> (different instances, but the logic is shared by composition) 15:14 < lifeless> that is pased into ESIInclude::Start because ESI includes have a primary include and an 'alternate' include. 15:16 < lifeless> so all you need to start the chain is: -15:16 < nicholas> I see. I won't need to worry about any of that. +15:16 < nicholas> I see. I will not need to worry about any of that. 15:16 < lifeless> HttpHeader tempheaders(hoRequest); 15:17 < lifeless> if (clientBeginRequest(METHOD_GET, url, aBufferRecipient, aBufferDetach, aStreamInstance, &tempheaders, aStreamInstance->buffer->buf, HTTP_REQBUF_SZ)) @@ -130,7 +130,7 @@ organised... 15:25 < lifeless> and likewise for the Detach static method 15:26 < lifeless> is this making sense ? 15:27 < nicholas> yes, but just let me reread a litt.e -15:27 < lifeless> ok, theres one more important thing :) +15:27 < lifeless> ok, there is one more important thing :) 15:27 < nicholas> "static_cast(node->data)->bufferData(node, ...)" calls myStream::BufferData doesn't it? So why am I calling myself? 15:28 < lifeless> lowercase bufferData :) @@ -162,7 +162,7 @@ organised... 15:35 < lifeless> ok, where to put the analyzer ? we've got some rework we want to do in the request flow that would make this a lot easier to answer. 15:35 < lifeless> I think that the right place for now, is exactly where esi goes, and after esi in the chain. -15:35 < lifeless> the problem with where you are is that ftp pages won't be analysed. and if its an esi upstream then the urls could be +15:35 < lifeless> the problem with where you are is that ftp pages will not be analysed. and if its an esi upstream then the urls could be wrong (for instance) 15:35 < nicholas> http requests that come in from clients have a client stream chain? 15:36 < lifeless> yup @@ -189,7 +189,7 @@ organised... 15:38 < lifeless> so right before that #if ESI line. 15:39 < nicholas> Oh, I see it has the body at this point already? 15:39 < nicholas> Or does it just have a partial body? -15:39 < lifeless> it may have some body, but it definately has the reply metadata +15:39 < lifeless> it may have some body, but it definitely has the reply metadata 15:39 < nicholas> Because my code is rigged to work with partial data. 15:39 < nicholas> ok, good. 15:39 < nicholas> Then that's *exactly* right. diff --git a/docs/DeveloperResources/RequestUseCases.md b/docs/DeveloperResources/RequestUseCases.md index 12f9559e..0a42cfcc 100644 --- a/docs/DeveloperResources/RequestUseCases.md +++ b/docs/DeveloperResources/RequestUseCases.md @@ -60,7 +60,7 @@ Socket - an fd on unix, a HANDLE on windows. - Socket holds `CallbackReference` to the comms layer to notify it of close. 1. New Socket is passed to the listening factory for the port it was - recieved on. + received on. - Factory constructs `HttpClientConnection` to represent the Socket at the protocol layer. - Factory cals `Socket.setClient(HttpClientConnection)` @@ -69,7 +69,7 @@ Socket - an fd on unix, a HANDLE on windows. - `HttpClientConnection` holds `CallbackReference` to the Socket. 1. `HttpClientConnection` calls read() on the Socket - For some systems, the read is scheduled on the socket now. For - others, when the next event loop occurs, the read willl be done. 
+      others, when the next event loop occurs, the read will be done.
     - Socket gets a `RefCount` reference to the dispatcher.
 1. Socket requests read from the OS (if it was not already scheduled)
 1. read completes
@@ -155,7 +155,7 @@ Socket - an fd on unix, a HANDLE on windows.
     - `SocketClient` has a weak reference to the Socket: It new
       Client owns the socket. Nothing owns the Client. Socket has
      callback to the client to notify on events : `ReadPossible`(data has
-      arrived), Close(by request or external occurence). Other events
+      arrived), Close(by request or external occurrence). Other events
       get callbacks as each is queued - ask the socket to read and hand
       the callback to be called in. This could be 'this' if we structure
       the ap well, or it could be some other thing.
@@ -189,7 +189,7 @@ Socket - an fd on unix, a HANDLE on windows.
 1. `ClientRequest` asks for a response to this normalised request from
    the URL mapper at the core of squid Socket has callbacks to
    `SocketClient` `SocketClient` owns Socket, and owns the
-   `ClientRequest` it has created. `ClientRequest` has calbacks to
+   `ClientRequest` it has created. `ClientRequest` has callbacks to
    `SocketClient` to call on events: `WillNotReadAnyMore`,
    `SocketMustBeClosed`, `SocketMustBeReset`.
 1. the URL mapper determines (based on the scheme or url path) that the
@@ -206,8 +206,8 @@ Socket - an fd on unix, a HANDLE on windows.
    request to read data.
 11. the client
 
-## Uncacheable request
+## Uncachable request
 
 ## Tunnel request
 
 ## Cachable request
diff --git a/docs/DeveloperResources/SquidCodingGuidelines.md b/docs/DeveloperResources/SquidCodingGuidelines.md
index d217c2e9..1716a63e 100644
--- a/docs/DeveloperResources/SquidCodingGuidelines.md
+++ b/docs/DeveloperResources/SquidCodingGuidelines.md
@@ -32,10 +32,10 @@
     `if (T a = b)...`.
 
 
-> :warning: The formater is known to enforce some weird indentation at times.
+> :warning: The formatter is known to enforce some weird indentation at times.
 Notably after `#if ... #endif` directives. If you find these, please
 ignore for now. They will be corrected in a later version of the
-formater.
+formatter.
 
 ## Mandatory coding rules
 
diff --git a/docs/DeveloperResources/index.md b/docs/DeveloperResources/index.md
index 62889f2a..4665de86 100644
--- a/docs/DeveloperResources/index.md
+++ b/docs/DeveloperResources/index.md
@@ -57,7 +57,7 @@ test a local checkout on it is to run the command:
 
 ./test-builds.sh squidcache/buildfarm-`uname -m`-$OS --verbose --use-config-cache --cleanup`
 
-It may leave behind some files owned by UID 1000; sorry it can't be
+It may leave behind some files owned by UID 1000; sorry it cannot be
 avoided
 
 ## Detecting build errors early
diff --git a/docs/EliezerCroitoru.md b/docs/EliezerCroitoru.md
index a061ed16..3089cd89 100644
--- a/docs/EliezerCroitoru.md
+++ b/docs/EliezerCroitoru.md
@@ -61,7 +61,7 @@ I have a complex lab setup with every major OS:
 
   - Windows Desktop+Server(2k12,2k16,2k19,7,8.1,10)
 
-  - Linux Desktop+Server+Router(CentOS,Ubunut,Debian,Alpine,Arch..)
+  - Linux Desktop+Server+Router(CentOS,Ubuntu,Debian,Alpine,Arch..)
- BSD(Free.Open.Nano,BSDRP) diff --git a/docs/EliezerCroitoru/Dnsbl/client_helper.md b/docs/EliezerCroitoru/Dnsbl/client_helper.md index 559a7f7d..a12905d6 100644 --- a/docs/EliezerCroitoru/Dnsbl/client_helper.md +++ b/docs/EliezerCroitoru/Dnsbl/client_helper.md @@ -102,7 +102,7 @@ def process(line) res << "Blacklisted" else #unknown or error - debug("unkown issue") if $debug + debug("unknown issue") if $debug end rescue Exception => e debug(e) diff --git a/docs/EliezerCroitoru/GoLangDelayer.md b/docs/EliezerCroitoru/GoLangDelayer.md index 8770b4a9..662f532d 100644 --- a/docs/EliezerCroitoru/GoLangDelayer.md +++ b/docs/EliezerCroitoru/GoLangDelayer.md @@ -42,7 +42,7 @@ func process_request(line string, wg *sync.WaitGroup) { lparts := strings.Split(strings.TrimRight(line, "\n"), " ") if len(lparts[0]) > 0 { if *debug { - fmt.Fprintln(os.Stderr, "ERRlog: Request nubmer => "+lparts[0]+"") + fmt.Fprintln(os.Stderr, "ERRlog: Request number => "+lparts[0]+"") } } else { return diff --git a/docs/EliezerCroitoru/GolangFakeHelper.md b/docs/EliezerCroitoru/GolangFakeHelper.md index ff332b18..0dfcd600 100644 --- a/docs/EliezerCroitoru/GolangFakeHelper.md +++ b/docs/EliezerCroitoru/GolangFakeHelper.md @@ -38,7 +38,7 @@ func process_request(line string, wg *sync.WaitGroup) { lparts := strings.Split(strings.TrimRight(line, "\n"), " ") if len(lparts[0]) > 0 { if *debug == "yes" { - fmt.Fprintln(os.Stderr, "ERRlog: Proccessing request => \""+strings.TrimRight(line, "\n")+"\"") + fmt.Fprintln(os.Stderr, "ERRlog: Processing request => \""+strings.TrimRight(line, "\n")+"\"") } } fmt.Println(lparts[0] + " " + *answer) diff --git a/docs/EliezerCroitoru/Helpers/DomainsLogger.md b/docs/EliezerCroitoru/Helpers/DomainsLogger.md index 1ac0a8b0..647cfde7 100644 --- a/docs/EliezerCroitoru/Helpers/DomainsLogger.md +++ b/docs/EliezerCroitoru/Helpers/DomainsLogger.md @@ -76,7 +76,7 @@ func process_request(line string) { if len(lparts[0]) > 0 { if *debug { - fmt.Fprintln(os.Stderr, "ERRlog: Proccessing request => \""+strings.TrimRight(line, "\n")+"\"") + fmt.Fprintln(os.Stderr, "ERRlog: Processing request => \""+strings.TrimRight(line, "\n")+"\"") } } diff --git a/docs/EliezerCroitoru/Helpers/YT-Watch-Stats.md b/docs/EliezerCroitoru/Helpers/YT-Watch-Stats.md index 04d39a60..370bd5a3 100644 --- a/docs/EliezerCroitoru/Helpers/YT-Watch-Stats.md +++ b/docs/EliezerCroitoru/Helpers/YT-Watch-Stats.md @@ -15,7 +15,7 @@ published: false Croitoru](/Eliezer%20Croitoru) - [NgTech](http://www1.ngtech.co.il/) - - **Proejct git(with binaries)**: [NgTech git: + - **Project git(with binaries)**: [NgTech git: youtube-watch-counter](http://gogs.ngtech.co.il/elicro/youtube-watch-counter) This helper is a part of a suite that analyze requests and schedules a @@ -303,7 +303,7 @@ func process_request(line string, wg *sync.WaitGroup) { lparts := strings.Split(strings.TrimRight(line, "\n"), " ") if len(lparts) > 1 && len(lparts[0]) > 0 && len(lparts[1]) > 0 { if *debug { - fmt.Fprintln(os.Stderr, "ERRlog: Proccessing request => \""+strings.TrimRight(line, "\n")+"\"") + fmt.Fprintln(os.Stderr, "ERRlog: Processing request => \""+strings.TrimRight(line, "\n")+"\"") } switch { case re[0].MatchString(lparts[1]): diff --git a/docs/Features/AddonHelpers.md b/docs/Features/AddonHelpers.md index c5fc5c6b..176273aa 100644 --- a/docs/Features/AddonHelpers.md +++ b/docs/Features/AddonHelpers.md @@ -507,7 +507,7 @@ Result line sent back to Squid: URL-rewrite](https://wiki.squid-cache.org/Features/AddonHelpers/Features/StoreUrlRewrite#) feature helpers 
written for [Squid-2.7](https://wiki.squid-cache.org/Features/AddonHelpers/Squid-2.7#). - However thst syntax is deprecated and such helpers should be + However that syntax is deprecated and such helpers should be upgraded as soon as possible to use this Store-ID syntax. ### Authenticator diff --git a/docs/Features/Authentication.md b/docs/Features/Authentication.md index f37f9502..a5a3c6d0 100644 --- a/docs/Features/Authentication.md +++ b/docs/Features/Authentication.md @@ -75,7 +75,7 @@ exchanged in plain text over the wire. Each scheme have their own set of helpers and [auth_param](http://www.squid-cache.org/Doc/config/auth_param) settings. Notice that helpers for different authentication schemes use -different protocols to talk with squid, so they can't be mixed. +different protocols to talk with squid, so they cannot be mixed. For information on how to set up NTLM authentication see [NTLM config examples](/ConfigExamples/Authenticate/Ntlm). @@ -353,7 +353,7 @@ this document but usually it's not in plain text. In side-band authentication, using the [external_acl_type](http://www.squid-cache.org/Doc/config/external_acl_type) -directive. There is a *password=* value which is possibly transfered to +directive. There is a *password=* value which is possibly transferred to Squid from the helper. This value is entirely **optional** and may in fact have no relation to a real password so we cannot be certain what risks are actually involved. When received it is generally treated by diff --git a/docs/Features/AutoCacheDirSizing.md b/docs/Features/AutoCacheDirSizing.md index c3154fc6..753210a1 100644 --- a/docs/Features/AutoCacheDirSizing.md +++ b/docs/Features/AutoCacheDirSizing.md @@ -17,14 +17,14 @@ categories: WantedFeature Fernando Ulisses dos Santos suggests to create a option in cache_dir param, like this: `cache_dir /var/spool/squid AUTO` -where AUTO indicates that squid may use all avaliable space in disc, but +where AUTO indicates that squid may use all available space in disc, but auto-decrease when the disc is near of being full. It may have a parameter like always leave 10% free on the partition, if it's above, call the auto-clean function. -this may help administrators on: - minimize effort on instalation (don't +this may help administrators on: - minimize effort on installation (don't need to know how many directories, space, etc) - maximize network -performance (using all avaliable space in disk, when avaliable) - +performance (using all available space in disk, when available) - minimize downtime (when other program fill the disk) [HenrikNordström](/HenrikNordstrom) diff --git a/docs/Features/BearerAuthentication.md b/docs/Features/BearerAuthentication.md index 0de0ec16..0f331d1e 100644 --- a/docs/Features/BearerAuthentication.md +++ b/docs/Features/BearerAuthentication.md @@ -74,7 +74,7 @@ part has been kept intentionally minor and simple to improve the overall system security. > :warning: - squid only implements the **Autorization header field** Bearer + squid only implements the **Authorization header field** Bearer tokens. Alternative *Form field* method is not compatible with HTTP proxy needs and method *URI query parameter* is too insecure to be trustworthy. diff --git a/docs/Features/BetterStringBuffer.md b/docs/Features/BetterStringBuffer.md index a81807f9..e8ebdea7 100644 --- a/docs/Features/BetterStringBuffer.md +++ b/docs/Features/BetterStringBuffer.md @@ -121,7 +121,7 @@ or loose it. > :information_source: The above design will work, but there are alternatives. 
Can you compare the above with a simpler design where the buffer is locked - by stings using it, but does not point back to them; if a string + by strings using it, but does not point back to them; if a string needs to be modified and the buffer has more than one lock, the buffer (or its affected portion) is copied for that string use, without any affect on other strings. @@ -195,7 +195,7 @@ necessary. its portion is duplicated by the user code. Indeed, that is the intent. I use 'parent' and 'own' to differentiate -the case where these objects are referring to a seperate object 'parent' +the case where these objects are referring to a separate object 'parent' (shared buffer by offset+lock on the external object) or has master-control over a buffer (responsibiity for: allocate, de-allocate, notify-cascade initiate on changes) diff --git a/docs/Features/BrowsableStorage.md b/docs/Features/BrowsableStorage.md index 5e6d118e..db618b33 100644 --- a/docs/Features/BrowsableStorage.md +++ b/docs/Features/BrowsableStorage.md @@ -15,7 +15,7 @@ categories: WantedFeature From IRC: ```irc - 14:55:07) derekv: One functionality I would like is (using .pdf as an example) to have all pdfs that are downloaded through the proxy to be stored in an archive that is seperate from the cache, and where they are normal files that can be for example indexed and searched. + 14:55:07) derekv: One functionality I would like is (using .pdf as an example) to have all pdfs that are downloaded through the proxy to be stored in an archive that is separate from the cache, and where they are normal files that can be for example indexed and searched. (14:55:47) derekv: (one way this could be organized is to put them in a file structure similar to how wget does it when doing recursive downloads) (14:56:33) derekv: Unless there is some way to configure this, the easy way seems to be to use squidsearch with a periodic script to extract the files and move them to the store (14:58:00) derekv: But it seems like it would be even more clever if squid could be aware of the separate store, eg, "if the requested file is a .pdf, look in pdf archive", thus the store would act as a cache as well. diff --git a/docs/Features/CacheHierarchy.md b/docs/Features/CacheHierarchy.md index 46deae96..46c32353 100644 --- a/docs/Features/CacheHierarchy.md +++ b/docs/Features/CacheHierarchy.md @@ -146,14 +146,14 @@ Visit the NLANR cache [registration database](http://www.ircache.net/Cache/Tracker/) to discover other caches near you. Keep in mind that just because a cache is registered in the database **does not** mean they are willing to be your -parent/sibling/child. But it can't hurt to ask... +parent/sibling/child. But it cannot hurt to ask... ## Troubleshooting ### My cache registration is not appearing in the Tracker database. - Your site will not be listed if your cache IP address does not have - a DNS PTR record. If we can't map the IP address back to a domain + a DNS PTR record. If we cannot map the IP address back to a domain name, it will be listed as "Unknown." - The registration messages are sent with UDP. 
We may not be receiving your announcement message due to firewalls which block UDP, or
diff --git a/docs/Features/CacheManager/Index.md b/docs/Features/CacheManager/Index.md
index a960dcba..fe3968cc 100644
--- a/docs/Features/CacheManager/Index.md
+++ b/docs/Features/CacheManager/Index.md
@@ -36,7 +36,7 @@ Squid packages come with some tools for accessing the cache manager:
 
 Given that the Cache Manager uses plain HTTP, it's possible - and in
 fact easy - to develop custom tools. The most common one is curl, e.g.
-`curl -u user:pasword http://127.0.0.1:3128/squid-internal-mgr/menu`
+`curl -u user:password http://127.0.0.1:3128/squid-internal-mgr/menu`
 
 ## Controlling access to the cache manager
 
@@ -191,7 +191,7 @@ of course.
 
 ## Understanding the manager reports
 
-### What's the difference between Squid TCP connections and Squid UDP connections?
+### What is the difference between Squid TCP connections and Squid UDP connections?
 
 Browsers and caches use TCP connections to retrieve web objects from web
 servers or caches. UDP connections are used when another cache using you
@@ -253,7 +253,7 @@ by *Jonathan Larmour*
 
 You get a "page fault" when your OS tries to access something in memory
 which is actually swapped to disk. The term "page fault" while correct
-at the kernel and CPU level, is a bit deceptive to a user, as there's no
+at the kernel and CPU level, is a bit deceptive to a user, as there is no
 actual error - this is a normal feature of operation.
 
 Also, this doesn't necessarily mean your squid is swapping by that much.
@@ -278,27 +278,27 @@ directive in squid.conf than allow this to happen.
 
 by *Peter Wemm*
 
-There's two different operations at work, Paging and swapping. Paging is
+There are two different operations at work, paging and swapping. Paging is
 when individual pages are shuffled (either discarded or swapped to/from
 disk), while "swapping" *generally* means the entire process got sent
 to/from disk.
 
 Needless to say, swapping a process is a pretty drastic event, and
-usually only reserved for when there's a memory crunch and paging out
-cannot free enough memory quickly enough. Also, there's some variation
+usually only reserved for when there is a memory crunch and paging out
+cannot free enough memory quickly enough. Also, there is some variation
 on how swapping is implemented in OS's. Some don't do it at all or do a
 hybrid of paging and swapping instead.
 
 As you say, paging out doesn't necessarily involve disk IO, eg: text
 (code) pages are read-only and can simply be discarded if they are not
 used (and reloaded if/when needed). Data pages are also discarded if
-unmodified, and paged out if there's been any changes. Allocated memory
-(malloc) is always saved to disk since there's no executable file to
+unmodified, and paged out if there have been any changes. Allocated memory
+(malloc) is always saved to disk since there is no executable file to
 recover the data from. mmap() memory is variable.. If it's backed from
 a file, it uses the same rules as the data segment of a file - ie:
 either discarded if unmodified or paged out.
 
-There's also "demand zeroing" of pages as well that cause faults.. If
+There is also "demand zeroing" of pages, which causes faults. If
 you malloc memory and it calls brk()/sbrk() to allocate new pages, the
 chances are that you are allocated demand zero pages.
Ie: the pages are not "really" attached to your process yet, but when you access them for
@@ -333,7 +333,7 @@ squids on FreeBSD is reported to work better - the VM/buffer system
 could be competing with squid to cache the same pages. It's a pity that
 squid cannot use mmap() to do file IO on the 4K chunks in it's memory
 pool (I can see that this is not a simple thing to do though, but that
-won't stop me wishing. :-).
+will not stop me wishing. :-).
 
 by *John Line*
 
@@ -348,7 +348,7 @@ behind-the-scenes.)
 
 The effect of this is that on Solaris 2, paging figures will also
 include file I/O. Or rather, the figures from vmstat certainly appear to
-include file I/O, and I presume (but can't quickly test) that figures
+include file I/O, and I presume (but cannot quickly test) that figures
 such as those quoted by Squid will also include file I/O.
 
 To confirm the above (which represents an impression from what I've read
diff --git a/docs/Features/CacheManager/IpCache.md b/docs/Features/CacheManager/IpCache.md
index 73ca67a3..f6d3d6eb 100644
--- a/docs/Features/CacheManager/IpCache.md
+++ b/docs/Features/CacheManager/IpCache.md
@@ -37,7 +37,7 @@ IP Cache Contents:
 ```
 
 ## FAQ about this report
 
-### What's the difference between a hit, a negative hit and a miss?
+### What is the difference between a hit, a negative hit and a miss?
 
 - A HIT means the domain was found in the cache.
 - A MISS means the domain was not found in the cache.
diff --git a/docs/Features/CacheManager/SquidClientTool.md b/docs/Features/CacheManager/SquidClientTool.md
index a695099e..a1296535 100644
--- a/docs/Features/CacheManager/SquidClientTool.md
+++ b/docs/Features/CacheManager/SquidClientTool.md
@@ -38,4 +38,4 @@ depending on your Squid version.
 - squidclient version 3.1.\* and older you add **@** then the password
   to the URL. So that it looks like this `mgr:info@admin`.
 - squidclient version 3.2.\* use the proxy login options **-u** and
-  **w** to pass your admin login to the cache manger.
+  **w** to pass your admin login to the cache manager.
diff --git a/docs/Features/ClientSideCleanup.md b/docs/Features/ClientSideCleanup.md
index 9713fd19..320132b8 100644
--- a/docs/Features/ClientSideCleanup.md
+++ b/docs/Features/ClientSideCleanup.md
@@ -38,7 +38,7 @@ and, hence, has been called *client side*.
   - reading HTTP/1.1 frames (request headers block, body blocks)
   - writing HTTP/1.1 frames (response headers block, 1xx headers
     block, body blocks)
   - generate HttpParser, ClientSocketContext and other AsyncJobs
-    to operate on teh above frames types as needed
+    to operate on the above frame types as needed
 
 ### In Progress
 
diff --git a/docs/Features/CodeTestBed.md b/docs/Features/CodeTestBed.md
index 4b78ecc1..77739341 100644
--- a/docs/Features/CodeTestBed.md
+++ b/docs/Features/CodeTestBed.md
@@ -46,7 +46,7 @@ current developer practices.
 
 ### Taking Part in the Testing
 
-see [BuildFarm](/BuildFarm) on whats needed and how to volunteer
+see [BuildFarm](/BuildFarm) on what is needed and how to volunteer
 time on a machine as a test slave.
 
 ### Tasks needing a volunteer:
 
diff --git a/docs/Features/CollapsedForwarding.md b/docs/Features/CollapsedForwarding.md
index fae39344..1559e1f8 100644
--- a/docs/Features/CollapsedForwarding.md
+++ b/docs/Features/CollapsedForwarding.md
@@ -20,7 +20,7 @@ for the same URI to be processed as one request to the backend server.
Normally disabled to avoid increased latency on dynamic content, but there can be benefit from enabling this in accelerator setups where the web servers are the bottleneck but are reliable and return mostly -cacheable information. +cachable information. It was left out of [Squid-3.0](/Releases/Squid-3.0) due to time and stability constraints. The diff --git a/docs/Features/ConnPin.md b/docs/Features/ConnPin.md index 55b21f71..5e0d57c6 100644 --- a/docs/Features/ConnPin.md +++ b/docs/Features/ConnPin.md @@ -25,7 +25,7 @@ servers using Microsoft Integrated Login (NTLM/Negotiate), it needs: - code to activate the tying when a stateful authentication layer is seen - code to mark the objects downloaded over a pinned connection - uncacheable + uncachable - code to add a header advertising this capability to clients The HTTP protocol extensions used to negotiate this is documented in diff --git a/docs/Features/CppCodeFormat.md b/docs/Features/CppCodeFormat.md index a12fffb9..7c61f78b 100644 --- a/docs/Features/CppCodeFormat.md +++ b/docs/Features/CppCodeFormat.md @@ -26,7 +26,7 @@ See [Doing a Reformat](#doing-a-reformat) omit the format step if you do not have the right version. Poorly formatted code is often difficult to read so if you do have the -right tools, consider formating before sending a \[PATCH\] or \[MERGE\] +right tools, consider formatting before sending a \[PATCH\] or \[MERGE\] request to squid-dev for auditing (or before committing accepted code). A global reformat is repeated regularly on trunk, but it saves everybody trouble and keeps bundlebuggy happy if patches have the right format to diff --git a/docs/Features/CustomErrors.md b/docs/Features/CustomErrors.md index 9b327273..921a19fe 100644 --- a/docs/Features/CustomErrors.md +++ b/docs/Features/CustomErrors.md @@ -172,7 +172,7 @@ browser behaviour handling these CONNECT messages (described in error page produced by the proxy is ignored and a generic browser page displayed instead. -Usually this browser page mentions connection faulure or other such +Usually this browser page mentions connection failure or other such irrelevant details. In fact any response other than **200 OK** is completely dropped by the diff --git a/docs/Features/DelayPools.md b/docs/Features/DelayPools.md index 4ccc5c75..8bc135b1 100644 --- a/docs/Features/DelayPools.md +++ b/docs/Features/DelayPools.md @@ -142,7 +142,7 @@ microwave (ATM) network. For our local access we use a dstdomain ACL, and for delay pool exceptions we use a dst ACL as well since the delay pool ACL processing -is done using "fast lookups", which means (among other things) it won't +is done using "fast lookups", which means (among other things) it will not wait for a DNS lookup if it would need one. Our proxy has two virtual interfaces, one which requires student diff --git a/docs/Features/DetectVariantUri.md b/docs/Features/DetectVariantUri.md index 107d7abf..73ba4d47 100644 --- a/docs/Features/DetectVariantUri.md +++ b/docs/Features/DetectVariantUri.md @@ -10,7 +10,7 @@ Anyone on Development Team ### Description A number of sites around the web send out identical content on different -URI. This is often found occuring due to: +URI. 
This often occurs due to: - bad designs in load-balancing - attempts at explicit cache-busting - non-compliance with HTTP privacy standards diff --git a/docs/Features/DiskDaemon.md b/docs/Features/DiskDaemon.md index 5024c080..3587e75a 100644 --- a/docs/Features/DiskDaemon.md +++ b/docs/Features/DiskDaemon.md @@ -177,7 +177,7 @@ Add this command into your *RunCache* or *squid_start* script: ## What are the Q1 and Q2 parameters? In the source code, these are called *magic1* and *magic2*. These -numbers refer to the number of oustanding requests on a message queue. +numbers refer to the number of outstanding requests on a message queue. They are specified on the *cache_dir* option line, after the L1 and L2 directories: diff --git a/docs/Features/Dnsserver.md b/docs/Features/Dnsserver.md index 7bb22007..88e42657 100644 --- a/docs/Features/Dnsserver.md +++ b/docs/Features/Dnsserver.md @@ -69,7 +69,7 @@ a lot of requests, the second one less than the first, etc. The last *dnsserver* should have serviced relatively few requests. If there is not an obvious decreasing trend, then you need to increase the number of *dns_children* in the configuration file. If the last *dnsserver* has -zero requests, then you definately have enough. +zero requests, then you definitely have enough. Another factor which affects the DNS service time is the proximity of your DNS resolver. Normally we do not recommend running Squid and diff --git a/docs/Features/DynamicSslCert.md b/docs/Features/DynamicSslCert.md index a6984b4d..eb3bc020 100644 --- a/docs/Features/DynamicSslCert.md +++ b/docs/Features/DynamicSslCert.md @@ -131,7 +131,7 @@ For example, in FireFox: 1. Go to the 'Advanced' section, 'Encryption' tab 1. Press the 'View Certificates' button and go to the 'Authorities' tab 1. Press the 'Import' button, select the .der file that was created - previously and pres 'OK' + previously and press 'OK' In theory, you must either import your root certificate into browsers or instruct users on how to do that. Unfortunately, it is apparently a diff --git a/docs/Features/ForwardRework.md b/docs/Features/ForwardRework.md index 32e57a81..c6c46f1b 100644 --- a/docs/Features/ForwardRework.md +++ b/docs/Features/ForwardRework.md @@ -29,13 +29,13 @@ And protocols we have a client implementation of: - GOPHER - FTP -Theres a [patch](https://bugs.squid-cache.org/show_bug.cgi?id=1763) to +There is a [patch](https://bugs.squid-cache.org/show_bug.cgi?id=1763) to break out the server implementations - HTTPS, HTTP, ICP, HTCP. This possibly needs more work to be really polished, and is slated for 3.1. Some work has been done on breaking out the protocols we can have in a request into a single clean set of classes, making it modular, but its -not finished - and probably cant be until the protocols we implement +not finished - and probably cannot be until the protocols we implement clients of, and the connection between having a request object and actually handing it off to an external server, are decoupled. diff --git a/docs/Features/HTTP2.md b/docs/Features/HTTP2.md index da9ee15e..48deb09b 100644 --- a/docs/Features/HTTP2.md +++ b/docs/Features/HTTP2.md @@ -17,7 +17,7 @@ categories: WantedFeature # Details -HTTP/2 was designed loosly based on the SPDY experimental protocol for +HTTP/2 was designed loosely based on the SPDY experimental protocol for framing HTTP requests in a multiplexed fashion over SSL connections. Avoiding the pipeline issues which HTTP has with its dependency on stateful "\\r\\n" frame boundaries. 
diff --git a/docs/Features/HelperPause.md b/docs/Features/HelperPause.md index c4e22505..9dd8e0b8 100644 --- a/docs/Features/HelperPause.md +++ b/docs/Features/HelperPause.md @@ -16,7 +16,7 @@ some sort of "pause" message back to squid to signal that that child is temporarily unavailable for new queries, and then a "ready" message when it's available again. (yes, this is kinda obscure - the issue here is a single-threaded rewriter helper app that occasionally has to re-read its -rules database, and can't answer queries while it's doing so) +rules database, and cannot answer queries while it's doing so) It is not clear whether expanding redirector API is the right direction. It could be argued that folks that need non-basic adaptors should use diff --git a/docs/Features/HotConf.md b/docs/Features/HotConf.md index 4e2e8d76..1dd286fa 100644 --- a/docs/Features/HotConf.md +++ b/docs/Features/HotConf.md @@ -555,7 +555,7 @@ I'm saying: buffer to the right component processor unit then move on to the next line. - configure is a process of three function calls pre/configure/post to - warn the component whats about to happen. What gets done is not + warn the component what is about to happen. What gets done is not relevant to the lower layer. I also took a look at breaking the line down into generic tokens and @@ -576,7 +576,7 @@ has? create a new one, Parse into it, and pass it back to the component? in order to _save_ complexity? I don't believe I have to mention any of the problems associated with that to you. -IMO thats _way_ more complexity and trouble than simply passing +IMO that's _way_ more complexity and trouble than simply passing squid.conf buffers to the component. You could I suppose go the way of having pre-configure method/function return a void\* that gets passed back. @@ -588,7 +588,7 @@ component if it even needs them. Consider the third-party black-box component Widget dynamically loaded last configure time. In order to parse the widget_magic lines which part of the upper layer (squid) and lower-layer (component library) -whats the minimum transfer of information and call complexity we can do? +what is the minimum transfer of information and call complexity we can do? ``` squid: 'about to reconfigure' @@ -767,7 +767,7 @@ module.* d. *Created Config objects are assembled into a Squid Config object. Let's ignore how that is done and by whom.* - - Another vital assumption we can't just ignore. + - Another vital assumption we cannot just ignore. - IMO unnecessary as stated. diff --git a/docs/Features/IPv6.md b/docs/Features/IPv6.md index d101ab12..a73f024d 100644 --- a/docs/Features/IPv6.md +++ b/docs/Features/IPv6.md @@ -82,7 +82,7 @@ The only points of possible interest for some will be: ## Trouble Shooting IPv6 -### Squid builds with IPv6 but it won't listen for IPv6 requests. +### Squid builds with IPv6 but it will not listen for IPv6 requests. **Your squid may be configured to only listen for IPv4.** @@ -243,7 +243,7 @@ Example creation in squid.conf: acl to_ipv6 dst ipv6 acl from_ipv6 src ipv6 -## Why can't I connect to my localhost peers? +## Why can I not connect to my localhost peers? In modern IPv6-enabled systems the special **localhost** name has at least two IP addresses. IPv4 (127.0.0.1) and IPv6 (::1). @@ -263,7 +263,7 @@ localhost until you can IPv6-enable the peers. ## So what gets broken by IPv6? -Also, a few features can't be used with IPv6 addresses. IPv4 traffic +Also, a few features cannot be used with IPv6 addresses. 
IPv4 traffic going through Squid is unaffected by this. Particularly traffic from IPv4 clients. However they need to be noted. @@ -288,7 +288,7 @@ around 2010 with the introduction of NAT66 and NPT66. Squid delay pools are still linked to class-B and class-C networking (from pre-1995 Internet design). Until that gets modernized the -address-based pool classes can't apply to IPv6 address sizes. +address-based pool classes cannot apply to IPv6 address sizes. The pools that should still work are the Squid-3 username based pool, or tag based pool. diff --git a/docs/Features/LinuxOptimizedIO.md b/docs/Features/LinuxOptimizedIO.md index 6d1ae28c..94469ff1 100644 --- a/docs/Features/LinuxOptimizedIO.md +++ b/docs/Features/LinuxOptimizedIO.md @@ -15,7 +15,7 @@ involving pipes: splice, tee and vmsplice pipe Those ***might*** be useful in different cases: respectively disk cache -hit, cacheable miss and (probably) error pages. We need to verify that +hit, cachable miss and (probably) error pages. We need to verify that the semantics are right, and what kind of compromises are required to implement them diff --git a/docs/Features/LogFormat.md b/docs/Features/LogFormat.md index fb3d94b9..41a51d85 100644 --- a/docs/Features/LogFormat.md +++ b/docs/Features/LogFormat.md @@ -66,7 +66,7 @@ format line for native *access.log* entries looks like this: "%9d.%03d %6d %s %s/%03d %d %s %s %s %s%s/%s %s" Therefore, an *access.log* entry usually consists of (at least) 10 -columns separated by one ore more spaces: +columns separated by one or more spaces: 1. **time** A Unix timestamp as UTC seconds with a millisecond resolution. This is the time when Squid started to log the diff --git a/docs/Features/MemPools.md b/docs/Features/MemPools.md index 35c02e08..135f294c 100644 --- a/docs/Features/MemPools.md +++ b/docs/Features/MemPools.md @@ -25,7 +25,7 @@ converted from C functions to static members of a C++ class. This leaves some issues open, such as initialization order. Also, with the current advancements in malloc implementations one may -want to link Squid against an alternaive malloc implementation: +want to link Squid against an alternative malloc implementation: - [Google tcmalloc](https://github.com/google/tcmalloc) - [Wolfram Gloger's ptmalloc3](http://www.malloc.de/en/) @@ -58,5 +58,5 @@ and followed by an empty line then the 'public:' section definition. ``` Classes which use the CBDATA_CLASS macro **must not** also use -MEMPROXY_CLASS. That includes use in the direct line of inheritence +MEMPROXY_CLASS. That includes use in the direct line of inheritance within a class hierarchy. diff --git a/docs/Features/MultipleUnlinkdQueues.md b/docs/Features/MultipleUnlinkdQueues.md index 4534ec7f..f4d650ab 100644 --- a/docs/Features/MultipleUnlinkdQueues.md +++ b/docs/Features/MultipleUnlinkdQueues.md @@ -4,7 +4,7 @@ categories: WantedFeature # Feature: Per-Store Unlinkd Queues - **Goal**: Currently there is a single global unlinkd queue. It's - possibile that a slow disk fills it up, causing other store_dirs to + possible that a slow disk fills it up, causing other store_dirs to starve and thus not free disk storage fast enough. 
- **Status**: *Not started* - **ETA**: *unknown* diff --git a/docs/Features/NewLogging.md b/docs/Features/NewLogging.md index ab3e3bc9..eecb6e58 100644 --- a/docs/Features/NewLogging.md +++ b/docs/Features/NewLogging.md @@ -7,7 +7,7 @@ categories: WantedFeature The Squid logging stuff isn't: - fast enough - stdio, same execution thread as Squid -- flexible enough - only can write to a file; can't write over the +- flexible enough - can only write to a file; cannot write over the network, to MySQL, etc. The aim of this is to enumerate a replacement logging facility for Squid @@ -45,7 +45,7 @@ which will be fast and flexible. fixed multiple of the page size and hope the application malloc() doesn't recycle those pages too quickly. Grr\! - Still, even at 10,000 req/sec with an average logging line length of - 160 characters thats 1.52 megabytes a second of data to copy; not + 160 characters that's 1.52 megabytes a second of data to copy; not exactly a huge amount for modern machines. - It shouldn't bother trying to enumerate the logging entries at all in the first pass. Just have them formatted in Squid and sent over @@ -67,7 +67,7 @@ which will be fast and flexible. The most efficient method would be to bunch the logfile lines up into a big chunk that can be written all at once to disk or the UDP socket but, to be honest, people will probably like having each line - seperately enumerated. + separately enumerated. ## Implementation details @@ -97,7 +97,7 @@ which will be fast and flexible. socket please." - Grab the Wikipedia patch which does logging over UDP and massage it into this framework -- Anthing else? +- Anything else? ## Version 2? diff --git a/docs/Features/NewServerSide.md b/docs/Features/NewServerSide.md index 623f8c3e..f284047d 100644 --- a/docs/Features/NewServerSide.md +++ b/docs/Features/NewServerSide.md @@ -23,7 +23,7 @@ servers/peers. end servers/peers. - HTTP/HTTPS CONNECT style connections -## What won't it do? +## What will it not do? - Handle content/transfer encodings (eg gzip/deflate) - Handle any of the cache logic whatsoever @@ -73,7 +73,7 @@ connection pools. A few ideas: if we really need to migrate stuff.) - Implement multiple threads for handling client and server events; the majority of connections (normal, pinned) will be inside a given - thread and so won't need to involve thread locking to queue stuff. + thread and so will not need to involve thread locking to queue stuff. Persistent connections could be managed as above to limit thread locking overhead or, well, we could just lock the persistent connection set. diff --git a/docs/Features/NoCentralStoreIndex.md b/docs/Features/NoCentralStoreIndex.md index 8e570a87..3396cfbd 100644 --- a/docs/Features/NoCentralStoreIndex.md +++ b/docs/Features/NoCentralStoreIndex.md @@ -20,7 +20,7 @@ The big single in-memory store index is starting to become quite a burden. There is a need for something which scales better with both size and CPU. -We need to move away from this, providing an asyncronous store lookup +We need to move away from this, providing an asynchronous store lookup mechanism allowing the index to be moved out from the core and down to the store layer. Ultimately even supporting shared stores used by multiple Squid frontends. 
diff --git a/docs/Features/Optimizations.md b/docs/Features/Optimizations.md index a07fa688..623a4f5e 100644 --- a/docs/Features/Optimizations.md +++ b/docs/Features/Optimizations.md @@ -15,10 +15,10 @@ nnn Get rid of some unneeded duplicate copying of information -- There's a copy from the http.c server-side code (via storeAppend()) +- There is a copy from the http.c server-side code (via storeAppend()) to the client_side.c client-side code (via storeClientCopy()) - in progress in s27_adri branch. -- There's a copy out from the store memory into the client-side layer +- There is a copy out from the store memory into the client-side layer (via storeClientCopy()) - integrated into Squid-2.HEAD ## Optimise the hard parts @@ -29,7 +29,7 @@ Get rid of some unneeded duplicate copying of information ## Implement scatter-gather IO Avoid having to use the packer to pack HTTP request/reply and headers -into a buffer before write()ing to the network-side; this won't really +into a buffer before write()ing to the network-side; this will not really be worth it until the copies are eliminated (above) and the [stackable IO model](/Features/StackableIO) is in. diff --git a/docs/Features/PartialResponsesCaching.md b/docs/Features/PartialResponsesCaching.md index 4b339433..8a882683 100644 --- a/docs/Features/PartialResponsesCaching.md +++ b/docs/Features/PartialResponsesCaching.md @@ -20,7 +20,7 @@ categories: WantedFeature (from the bug report): When range_offset_limit is set to -1, Squid tries to fetch the entire object in response to an HTTP range request. -- **Bug**: The entire object is fetched even when it is not cacheable +- **Bug**: The entire object is fetched even when it is not cachable (e.g. because it is larger than [maximum_object_size](http://www.squid-cache.org/Doc/config/maximum_object_size) or some other criteria). @@ -35,7 +35,7 @@ tries to fetch the entire object in response to an HTTP range request. file. The proper fix for this is to add caching of partial responses, -eleminating the need for +eliminating the need for [range_offset_limit](http://www.squid-cache.org/Doc/config/range_offset_limit) entirely. @@ -63,7 +63,7 @@ What Squid should do is: 1. skip the 1024-2048 chunk 1. fetch the 2049-3072 chunk -1. optionally skip the 3073+ chunks, or contiue fetching. +1. optionally skip the 3073+ chunks, or continue fetching. Caching the chunks received and marking the response as incomplete. diff --git a/docs/Features/PerformanceMeasure.md b/docs/Features/PerformanceMeasure.md index 022be87a..70376b68 100644 --- a/docs/Features/PerformanceMeasure.md +++ b/docs/Features/PerformanceMeasure.md @@ -59,7 +59,7 @@ When processed the dataset should contain: - a script listing the reply objects and size. Headers would need to be packaged with the benchmark bundle, bodies can be generated on setup from this file. -- a directory heirarchy of request and reply headers. +- a directory hierarchy of request and reply headers. ## Scripts diff --git a/docs/Features/Redirectors.md b/docs/Features/Redirectors.md index aebc0e0b..7db41554 100644 --- a/docs/Features/Redirectors.md +++ b/docs/Features/Redirectors.md @@ -284,10 +284,10 @@ header) you have to use a helper. The server doing this is very likely also to be using these private URLs -in things like cookies or embeded page content. There is nothing Squid +in things like cookies or embedded page content. There is nothing Squid can do about those. And worse they may not be reported by your visitors in any way indicating it is the re-writer. 
A browser-specific **my login -won't work** is just one popular example of the cookie side-effect. +will not work** is just one popular example of the cookie side-effect. ### Can I use something other than perl? diff --git a/docs/Features/SimpleWebServer.md b/docs/Features/SimpleWebServer.md index 0426b0c8..f8bdcd36 100644 --- a/docs/Features/SimpleWebServer.md +++ b/docs/Features/SimpleWebServer.md @@ -1,7 +1,7 @@ --- categories: WantedFeature --- -# Feature: Add simple Web Serving capabilites? +# Feature: Add simple Web Serving capabilities? - **Goal**: add simple webserving capabilities for generic content - **Status**: not started @@ -13,7 +13,7 @@ categories: WantedFeature # Details -Squid already has simple web-serving capabilites, e.g. error-pages, +Squid already has simple web-serving capabilities, e.g. error-pages, icons, etc. They are mostly hard-coded and served from memory. It would make sense to add simple webserving capabilities for generic content, and fold this static content back in. diff --git a/docs/Features/Snmp.md b/docs/Features/Snmp.md index f63945ee..f15a92bb 100644 --- a/docs/Features/Snmp.md +++ b/docs/Features/Snmp.md @@ -278,7 +278,7 @@ There are a lot of things you can do with SNMP and Squid. It can be useful in some extent for a longer term overview of how your proxy is doing. It can also be used as a problem solver. For example: how is it going with your filedescriptor usage? or how much does your LRU vary -along a day. Things you can't monitor very well normally, aside from +along a day. Things you cannot monitor very well normally, aside from clicking at the cachemgr frequently. Why not let MRTG do it for you? ## How can I use SNMP with Squid? diff --git a/docs/Features/SquidAppliance.md b/docs/Features/SquidAppliance.md index 69a60d46..d58941f8 100644 --- a/docs/Features/SquidAppliance.md +++ b/docs/Features/SquidAppliance.md @@ -89,7 +89,7 @@ and restoring configurations very strait forward. ### GUI -I can't see any practical reason to have X installed on the system. Is +I cannot see any practical reason to have X installed on the system. Is there anything on a proxy like this that would benefit from X? Some simple after the fact configuration and system information could be diff --git a/docs/Features/SslBump.md b/docs/Features/SslBump.md index db3ed41a..71c4335d 100644 --- a/docs/Features/SslBump.md +++ b/docs/Features/SslBump.md @@ -125,7 +125,7 @@ name for a site differing from its public certificate name. **[Squid-3.3](/Releases/Squid-3.3) and later** The *server-first* bumping algorithm with [certificate -mimicing](/Features/MimicSslServerCert) +mimicking](/Features/MimicSslServerCert) allows Squid to transparently pass on these flaws to the client browser for a more accurate decision about safety to be made there. diff --git a/docs/Features/SslServerCertValidator.md b/docs/Features/SslServerCertValidator.md index ae4c5e98..1465e9a8 100644 --- a/docs/Features/SslServerCertValidator.md +++ b/docs/Features/SslServerCertValidator.md @@ -146,4 +146,4 @@ might be out of date or simply not configured correctly. We could add an `squid.conf` option to control whether the helper is consulted after an OpenSSL-detected error, but since such errors should be rare, the option will likely add overheads to the common case without bringing any -functionality advantages for the rare erronous case. +functionality advantages for the rare erroneous case. 
diff --git a/docs/Features/StorageStuff.md b/docs/Features/StorageStuff.md index 81f09569..95790a0b 100644 --- a/docs/Features/StorageStuff.md +++ b/docs/Features/StorageStuff.md @@ -67,17 +67,17 @@ The Squid storage manager does a bunch of things inefficiently. Namely: Replace this with an explicit "`GrabReply`" async routine which'll do said kicking (including reading object data from disk where appropriate) and return the reply status + headers, and any data - thats available. + that's available. -- That should mean we can get rid of the seen_offset stuff. Thats +- That should mean we can get rid of the seen_offset stuff. That's only ever used, as far as I can tell, when trying to parse the reply headers. -- Once thats happy (and its a significant amount of work\!), modify +- Once that's happy (and it's a significant amount of work\!), modify the storeClientCopy() API again take an offset and return a (mem_node + offset + size) which will supply the data required. The offset is required because the mem_node may contain data which has already be seen; the size is required because the mem_node may not yet be filled. -- Once -thats- happy (and thats another large chunk of work right +- Once -that's- happy (and that's another large chunk of work right there\!) consider changing things to not need to keep seeking into the memory object. Instead we should just do it in two parts - a seek() type call to set the current position, then return pages. @@ -110,7 +110,7 @@ The Squid storage manager does a bunch of things inefficiently. Namely: - The client API should be presented as two streams of data. One stream with status line and parsed entity headers (hop-by-hop headers should be filtered at the protocol side), the other a sparse - octet stream. Sparse to suppor ranges. Maybe there should be a seek + octet stream. Sparse to support ranges. Maybe there should be a seek function as well, but not really needed with the intermediary layer taking care of ranges. - Store API should be similarly split on both read write. Here a seek diff --git a/docs/Features/StoreID/CollisionRisks.md b/docs/Features/StoreID/CollisionRisks.md index 6e1693f6..add79dba 100644 --- a/docs/Features/StoreID/CollisionRisks.md +++ b/docs/Features/StoreID/CollisionRisks.md @@ -114,7 +114,7 @@ matching the clean site with another porn site pattern. ## Bank/Trade A bank account page that is not up-to-date will make the poor users buy -something he can't really pay for. +something they cannot really pay for. Administrators choosing to ignore caching rules on images causes wrong captcha picture to be delivered to users. Entering the embedded text in diff --git a/docs/Features/StoreID/DB.md b/docs/Features/StoreID/DB.md index 54f17374..59b70d56 100644 --- a/docs/Features/StoreID/DB.md +++ b/docs/Features/StoreID/DB.md @@ -736,7 +736,7 @@ Fedora latest mirrors as at 2013-10-15. ### Special Example Pattern of main repo data of Fedora The next pattern is a strict but yet complex pattern of all the main -repodata that icludes files DB and other important stuff that can be +repodata that includes files DB and other important stuff that can be cached safely. This pattern is very wide range so use it carefully. 
diff --git a/docs/Features/StoreID/Helper/Golang-2-api.md b/docs/Features/StoreID/Helper/Golang-2-api.md index 652a2f64..662e8c8a 100644 --- a/docs/Features/StoreID/Helper/Golang-2-api.md +++ b/docs/Features/StoreID/Helper/Golang-2-api.md @@ -59,7 +59,7 @@ func process_request(line string, wg *sync.WaitGroup) { lparts := strings.Split(strings.TrimRight(line, "\n"), " ") if len(lparts[0]) > 0 { if *debug { - fmt.Fprintln(os.Stderr, "ERRlog: Proccessing request => \""+strings.TrimRight(line, "\n")+"\"") + fmt.Fprintln(os.Stderr, "ERRlog: Processing request => \""+strings.TrimRight(line, "\n")+"\"") } } diff --git a/docs/Features/StoreID/Helper/Golang.md b/docs/Features/StoreID/Helper/Golang.md index 0f873c1b..24bc22da 100644 --- a/docs/Features/StoreID/Helper/Golang.md +++ b/docs/Features/StoreID/Helper/Golang.md @@ -39,7 +39,7 @@ func process_request(line string, re [256]*regexp.Regexp) { lparts := strings.Split(strings.TrimRight(line, "\n"), " ") if len(lparts[0]) > 0 { if *debug == "yes" { - fmt.Fprintln(os.Stderr, "ERRlog: Proccessing request => \""+strings.TrimRight(line, "\n")+"\"") + fmt.Fprintln(os.Stderr, "ERRlog: Processing request => \""+strings.TrimRight(line, "\n")+"\"") } } res := re[0].FindAllStringSubmatch(lparts[1] ,-1) diff --git a/docs/Features/StoreUrlRewrite.md b/docs/Features/StoreUrlRewrite.md index c2a51c7a..efea1fcd 100644 --- a/docs/Features/StoreUrlRewrite.md +++ b/docs/Features/StoreUrlRewrite.md @@ -22,7 +22,7 @@ categories: Feature My main focus with this feature is to support caching various CDN-supplied content which maps the same resource/content to multiple -locations. Initially I'm targetting Google content - Google Earth, +locations. Initially I'm targeting Google content - Google Earth, Google Maps, Google Video, Youtube - but the same technique can be used to cache similar content from CDNs such as Akamai (think "Microsoft Updates".) @@ -37,14 +37,14 @@ a number of structural changes: - An external helper (exactly the same data format is used as a redirect helper\!) receives URLs and can rewrite them to a canonical form - these rewritten URLs are stored as "store_url" URLs, - seperate from the normal URL; + separate from the normal URL; - The existing/normal URLs are used for ACL and forwarding - The "store_url" URLs are used for the store key lookup and storage - A new meta type has been added - STORE_META_STOREURL - which means - the on-disk object format has slightly changed. There's no big deal + the on-disk object format has slightly changed. There is no big deal here - Squid may warn about an unknown meta data type if you rollback to another squid version after trying this feature but it - won't affect the operation of your cache. + will not affect the operation of your cache. ## Squid Configuration @@ -88,7 +88,7 @@ section. refresh_pattern . 0 20% 4320 These rules make sure that you don't try caching cgi-bin and ? URLs -unless expiry information is explictly given. Make sure you don't add +unless expiry information is explicitly given. Make sure you don't add the rules after a "refresh_pattern ." line; refresh_pattern entries are evaluated in order and the first match is used\! The last entry must be the "." entry! diff --git a/docs/Features/StringNg.md b/docs/Features/StringNg.md index 5284feeb..53f80a1a 100644 --- a/docs/Features/StringNg.md +++ b/docs/Features/StringNg.md @@ -32,7 +32,7 @@ is the only holder of the MemBlob. Memory Manager friendliness can be obtained by tuning the allocation strategies for MemBlobs. 
Current practices are: heuristics are used to define how much extra space to allocate. Burden is split between -SBuf and MemBlob: the former former uses SBuf-local informations +SBuf and MemBlob: the former former uses SBuf-local information (e.g. the length of the SBuf lifetime expressed in number of copy operations), while MemBlob handles lower-level optimizations. diff --git a/docs/Features/Surrogate.md b/docs/Features/Surrogate.md index 0202d736..87fc4bdf 100644 --- a/docs/Features/Surrogate.md +++ b/docs/Features/Surrogate.md @@ -78,7 +78,7 @@ or maybe The web server or application must be capable of receiving the **Surrogate-Capability** headers and identifying whether the ID is -acceptible. +acceptable. > :x: Special care may be needed. The ID tags "unset-id" , "unconfigured" diff --git a/docs/Features/TCPAccess.md b/docs/Features/TCPAccess.md index 61aa7471..7bfc7ddc 100644 --- a/docs/Features/TCPAccess.md +++ b/docs/Features/TCPAccess.md @@ -17,7 +17,7 @@ categories: WantedFeature ## Details This is a proposal for a new tcp_access directive, to be executed -immediately when a new connection is accepted, before reading any HTPT +immediately when a new connection is accepted, before reading any HTTP request. As no HTTP data is yet available it's limited to src, myport, myaddr, time and maxconn type acls, maybe one or two more. diff --git a/docs/Features/Tproxy4.md b/docs/Features/Tproxy4.md index 4379c0ef..8bb1b9fc 100644 --- a/docs/Features/Tproxy4.md +++ b/docs/Features/Tproxy4.md @@ -402,7 +402,7 @@ router which passes packets to Squid. Then you will need to explicitly add some additional configuration. The WCCPv2 example is provided for people using Cisco boxes. For others -we can't point to exact routing configuration since it will depend on +we cannot point to exact routing configuration since it will depend on your router. But you will need to figure out some rule(s) which identify the Squid outbound traffic. Dedicated router interface, service groups, TOS set by Squid @@ -470,7 +470,7 @@ time it resolves to x.x.x.2 ## selinux policy denials When configuring TPROXY support on Fedora 12 using the Squid shipped -with Fedora selinux initially blocked Squid from usng the TPROXY +with Fedora selinux initially blocked Squid from using the TPROXY feature. The quick fix is disabling selinux entirely, but this is not generally diff --git a/docs/Features/UrnSupportRemoval.md b/docs/Features/UrnSupportRemoval.md index f9c679b3..404ab6b9 100644 --- a/docs/Features/UrnSupportRemoval.md +++ b/docs/Features/UrnSupportRemoval.md @@ -8,7 +8,7 @@ categories: Feature accepted by modern web browsers. In addition, most of the currently hardcoded practices can be easily achieved using different means, such as redirectors. Aim of the Feature is removing URN support - code, except for what's needed to successfully parse URNs + code, except for what is needed to successfully parse URNs - **Status**: *Not started* - **ETA**: *unknown* - **Version**: 3.2 diff --git a/docs/FrancescoChemolli.md b/docs/FrancescoChemolli.md index 1d4ed518..cef4856d 100644 --- a/docs/FrancescoChemolli.md +++ b/docs/FrancescoChemolli.md @@ -4,7 +4,7 @@ categories: Developer # Francesco Chemolli I'm involved with squid development since 2000, I've been working mainly -in the autentication area and on integration with Microsoft windows +in the authentication area and on integration with Microsoft windows authentication systems. 
Besides this, I'm a general nag-person for the other developers and I'm diff --git a/docs/Http11Checklist.md b/docs/Http11Checklist.md index a51abd65..3408c1c0 100644 --- a/docs/Http11Checklist.md +++ b/docs/Http11Checklist.md @@ -68,10 +68,10 @@ categories: [WantedFeature, Feature] | :rage: | :rage: | 45 | MUST | 3.7.1 | represent entity-bodies in canonical media-type form (except "text" types). | | | :rage: | :rage: | 46 | MUST | 3.7.1 | represent entity-bodies in canonical media-type form (except "text" types) prior to content-coding them | | | | | 47 | MUST | 3.7.1 | label data in charsets other than ISO-8859-1" with an appropriate charset value. | see rfc 2616 section 3.4.1 for compatibility notes | -| | | 48 | MUST | 3.7.2 | include a boundary parameter as part of the media type value for multipart media types | hno - squid does not generate or touch mulipart entries. Rbc - we may need to with TE content. Ermm, | -| | | 49 | MUST | 3.7.2 | only use CRLF in multipart messages to represent line breaks between body-parts. | hno - squid does not generate or touch mulipart entries. Rbc - we may need to with TE content. Ermm, | -| | | 50 | MUST | 3.7.2 | have the epilogue of multipart messages empty | hno - squid does not generate or touch mulipart entries. Rbc - we may need to with TE content. Ermm, | -| | | 51 | MUST NOT | 3.7.2 | transmit the epligoue of multipart messages (even if given one) | hno - squid does not generate or touch mulipart entries. Rbc - we may need to with TE content. Ermm, | +| | | 48 | MUST | 3.7.2 | include a boundary parameter as part of the media type value for multipart media types | hno - squid does not generate or touch multipart entries. Rbc - we may need to with TE content. Ermm, | +| | | 49 | MUST | 3.7.2 | only use CRLF in multipart messages to represent line breaks between body-parts. | hno - squid does not generate or touch multipart entries. Rbc - we may need to with TE content. Ermm, | +| | | 50 | MUST | 3.7.2 | have the epilogue of multipart messages empty | hno - squid does not generate or touch multipart entries. Rbc - we may need to with TE content. Ermm, | +| | | 51 | MUST NOT | 3.7.2 | transmit the epilogue of multipart messages (even if given one) | hno - squid does not generate or touch multipart entries. Rbc - we may need to with TE content. Ermm, | | | | 52 | SHOULD | 3.7.2 | follow the same behaviour as a MIME agent when receiving a multipart message-body. | | | | | 53 | MUST | 3.7.2 | treat unrecognized multipart subtypes as "multipart/mixed" | see rfc 1867 for multipart/form-data definition | | | | 54 | SHOULD | 3.8 | use short to the point product-tokens | | @@ -115,9 +115,9 @@ categories: [WantedFeature, Feature] | | | 90 | SHOULD | 4.4 | return 400 bad request if it cannot determine the length of a request message or 411 if we wish to enforce receiving a valid content-length | | | | | 91 | MUST NOT | 4.4 | include both Content-Length and a non-identity transfer coding. | | | | | 92 | MUST | 4.4 | ignore the Content-Length header if a non-identity Transfer-Encoding is received. (perhaps covering for TE instead of Transfer-Encoding??) | | -| | | 93 | MUST | 4.4 | IF we are acting like a user agent - ie 'client' - notify the user when an invalid length is received and detected - ie Content-Length does not match the number of octects in the message-body. 
| | -| | | 94 | MUST | 4.4 | when sending a response where a message body is allowed and we include Content-Length, it's value must match the number of OCTECTS in the message-body | | -| | | 95 | MUST | 4.5 | treat unrecognized header fields as enitity header fields | | +| | | 93 | MUST | 4.4 | IF we are acting like a user agent - ie 'client' - notify the user when an invalid length is received and detected - ie Content-Length does not match the number of octets in the message-body. | | +| | | 94 | MUST | 4.4 | when sending a response where a message body is allowed and we include Content-Length, its value must match the number of OCTETS in the message-body | | +| | | 95 | MUST | 4.5 | treat unrecognized header fields as entity header fields | | | | | 96 | SHOULD | 5.1.1 | return 405 if a request method is recognized but not allowed | | | | | 97 | SHOULD | 5.1.1 | return 501 if a request method is not implemented | | | | | 98 | MUST | 5.1.1 | support GET and HEAD for squid generated pages | | @@ -211,7 +211,7 @@ categories: [WantedFeature, Feature] | | | 186 | MUST NOT | 9.2 | forward a request with a Max-Forwards field when squid receives an OPTIONS request on an absoluteURI for which request forwarding is permitted and the value of Max-Forwards is 0. | | | | | 187 | SHOULD | 9.2 | response with Squids communications options to an OPTIONS request with a Max-Forwards field on an absoluteURI for which request forwarding is permitted and the value of Max-Forwards is 0. | | | | | 188 | MUST | 9.2 | decrement the Max-Forwards field-value when forwarding an OPTIONS request with a Max-Forwards field on an absoluteURI for which request forwarding is permitted and the value of Max-Forwards is a non zero integer. | | -| | | 189 | MUST NOT | 9.2 | add a Max-Forwards header to an OPTIONS request if non is present when squid recieves it | | +| | | 189 | MUST NOT | 9.2 | add a Max-Forwards header to an OPTIONS request if none is present when squid receives it | | | | | 190 | MUST NOT | 9.3 | cache a GET response if it does not meet the HTTP caching requirements from rfc 2616 section 13 | | | | | 191 | MUST NOT | 9.4 | for the squid 'server' - generate a message-body in HEAD responses | | | | | 192 | SHOULD | 9.4 | for the squid 'server' - generate identical http headers for a HEAD request to the equivalent GET request. | | @@ -266,23 +266,23 @@ categories: [WantedFeature, Feature] | | | 241 | SHOULD | 10.4.14 | When returning a 413 error when the request entity is too large and it is a time based (or temporary) restriction, include a Retry-After header indicating when it should be ok | | | | | 242 | SHOULD | 10.4.18 | return 417 when we have unambiguous evidence that the expectation given in a request can not be met by the next hop server | | | :rage: | :rage: | 243 | SHOULD | 10.5 | include an entity body when we create 5xx error responses explaining the issue (other than to HEAD requests) | | -| | | 244 | SHOULD | 10.5.2 | return a 501 if we don't implement a given method and can't just proxy it an hope | | +| | | 244 | SHOULD | 10.5.2 | return a 501 if we don't implement a given method and cannot just proxy it and hope | | | | | 245 | SHOULD | 10.5.3 | return a 502 if we get an invalid upstream response | | | | | 246 | SHOULD | 10.5.4 | return a 503 if we are overloaded, or unable to serve requests due to maintenance. | | | | | 247 | MAY | 10.5.4 | return a Retry-After when returning a 503 if we are overloaded, or unable to serve requests due to maintenance. 
(the header would indicate when the maintenance should finish | | -| | | 248 | SHOULD | 10.5.5 | return a 504 on an upstream timeout, or timeout on an auxilary server - ie DNS/authentication helper | we may be returning 400 or 500 presently | +| | | 248 | SHOULD | 10.5.5 | return a 504 on an upstream timeout, or timeout on an auxiliary server - ie DNS/authentication helper | we may be returning 400 or 500 presently | | :frowning: | :rage: 3.1 | 249 | MUST | 10.5.6 | return a 505 if we don't support, (or have \#defed it out) the HTTP major version in the request message | | | :rage: | :rage: | 250 | OPTIONAL | 11 | implement basic and or digest authentication | | | :frowning: | :rage: | 251 | MAY | 12 | use content-negotiation on any entity body request/response - ie in selecting what language the error should be in | | | | | 252 | MAY | 12.1 | for the squid client - include request header fields (Accept, Accept-Language, Accept-Encoding etc) in requests | | | | | 253 | MAY | 12.3 | develop transparent negotiation capabilities within HTTP/1.1 | | -| | | 254 | recommendation | 13 | Note: The server, cache, or client implementor might be faced with design decisions not explicitly discussed in this specification. If a decision might affect semantic transparency, the implementor ought to err on the side of maintaining transparency unless a careful and complete analysis shows significant benefits in breaking transparency. | | +| | | 254 | recommendation | 13 | Note: The server, cache, or client implementer might be faced with design decisions not explicitly discussed in this specification. If a decision might affect semantic transparency, the implementer ought to err on the side of maintaining transparency unless a careful and complete analysis shows significant benefits in breaking transparency. | | | | | 255 | MUST | 13.1.1 | respond to a request with the most up-to-date response held by squid which is appropriate to the request (see 13.2.5,13.2.6,13.12) and meets one of : 1) it has been revalidated with the origin, 2) it is "fresh enough (see 13.12) & 14.9 or 3) it is an appropriate 304/305/ 4xx/5xx response | | | 2.7 | | 256 | MAY | 13.1.1 | If a stored response is not "fresh enough" by the most restrictive freshness requirement of both the client and the origin server, in carefully considered circumstances the cache MAY still return the response with the appropriate Warning header (see section 13.1.5 and 14.46), unless such a response is prohibited (e.g., by a "no-store" cache-directive, or by a "no-cache" cache-request-directive; see section 14.9). | | | | | 257 | SHOULD | 13.1.1 | forward received responses even if the response itself is stale without adding a new Warning header | | | | | 258 | SHOULD NOT | 13.1.1 | attempt to revalidate responses that become stale in transit to squid | | | | | 259 | SHOULD | 13.1.1 | respond as per the 13.1.1 respond rules even if the origin server cannot be contacted. 
| | -| | | 260 | MUST | 13.1.1 | return an error or warning to the client if the origin server can't be contacted, and no response can be served under the 13.1.1 rules | | +| | | 260 | MUST | 13.1.1 | return an error or warning to the client if the origin server cannot be contacted, and no response can be served under the 13.1.1 rules | | | | | 261 | MUST | 13.1.2 | attach a warning noting when returning a response that is neither first-hand nor "fresh enough" using the Warning header | | | | | 262 | MUST | 13.1.2 | delete 1xx warnings from cached responses after successful revalidation | | | | | 263 | MAY | 13.1.2 | generate 1xx warnings when validating a cached entry | | @@ -313,7 +313,7 @@ categories: [WantedFeature, Feature] | | | 288 | MUST NOT | 13.3.5 | use other headers than entity tags and Last-Modified for validation | | | | | 289 | MAY | 13.4 | always cache a successful response (unless constrained by 14.9) | | | | | 290 | MAY | 13.4 | return cached responses without validation while fresh (unless constrained by 14.9) | | -| | | 291 | MAY | 13.4 | return cached responses after succesful validation (unless constrained by 14.9) | | +| | | 291 | MAY | 13.4 | return cached responses after successful validation (unless constrained by 14.9) | | | | | 292 | MAY | 13.4 | cache responses with no validator or expiration time, but shouldn't do so in normal conditions | | | | | 293 | MAY | 13.4 | cache and use as replies, responses with status codes 200, 203, 206, 300, 301 or 410 (subject to expiration & cache-control mechanisms) | | | | | 294 | MUST NOT | 13.4 | return responses to status codes other than (200, 203, 206, 300, 301 or 410) in a reply to subsequent requests unless there are cache-control directives that explicitly allow it (eg Expires/ a max-age , s-maxage, must-revalidate, proxy-revalidate, puvlic or private cache-control header | | @@ -344,8 +344,8 @@ categories: [WantedFeature, Feature] | | | 319 | MUST NOT | 13.8 | return a partial response to a client without marking it as such (using 206 status code) | | | | | 320 | MUST NOT | 13.8 | return a partial response to a client with status 200 | | | | | 321 | MAY | 13.8 | forward 5xx responses received while revalidating entries to the client, or act as if the server failed to respond | | -| | | 322 | MAY | 13.8 | when a server fails to respond, return a cached response unless the cached entry inludes the must-revalidate cache-control directive | | -| | | 323 | MUST NOT | 13.9 | treat GET and HEAD requests with ? In the URI path as fresh UNLESS explicit exipration times are provided in the response | | +| | | 322 | MAY | 13.8 | when a server fails to respond, return a cached response unless the cached entry includes the must-revalidate cache-control directive | | +| | | 323 | MUST NOT | 13.9 | treat GET and HEAD requests with ? In the URI path as fresh UNLESS explicit expiration times are provided in the response | | | | | 324 | SHOULD NOT | 13.9 | cache GET and HEAD responses from HTTP/1.0 servers with ? In the URI path | | | | | 325 | MUST | 13.10 | invalidate entities referred to by the Content-Location header;Location header or the Request-URI in PUT/DELETE and POST requests. 
This is only done for the same host hwn using the Content-Locaiton and Location headers | | | | | 326 | SHOULD | 13.10 | invalidate entities referred to by the Request-URI in non understood methods if we pass them upstream | | @@ -376,7 +376,7 @@ categories: [WantedFeature, Feature] | | | 351 | MUST | 14.8 | always revalidate responses with cache-control: s-maxage=0 | | | | | 352 | MUST | 14.9 | follow the cache-control header directives at all times | | | | | 353 | MUST | 14.9 | pass cache-control directives through to the next link in the message path (ie don't eat them) | | -| | | 354 | MAY | 14.9.1 | cache responses with cache-control: public even of the header/method might not normally be cacheable | | +| | | 354 | MAY | 14.9.1 | cache responses with cache-control: public even if the header/method might not normally be cachable | | | | | 355 | MUST NOT | 14.9.1 | cache responses with cache-control: private | | | | | 356 | MUST NOT | 14.9.1 | use responses with cache-control: no-cache to satisfy other requests without successful revalidation | ie auto GET to IMS is allowed | | | | 357 | MAY | 14.9.1 | use responses with cache-control: no-cache to satisfy other requests without successful revalidation if the no-cache directive includes field-names | | @@ -384,7 +384,7 @@ categories: [WantedFeature, Feature] | | | 359 | MAY | 14.9.2 | use no-store on requests or responses to prevent data storage | | | | | 360 | MUST NOT | 14.9.2 | store any part of a request or it's response if the cache-control: no-store directive was in the request | This directive applies to both non-shared and shared caches. "MUST NOT store" in this context means that the cache MUST NOT intentionally store the information in non-volatile storage, and MUST make a best-effort attempt to remove the information from volatile storage as promptly as possible after forwarding it. | | | | 361 | MUST NOT | 14.9.2 | store any part of a response or the request that elicited it if the cache-control: no-store directive was in the response | This directive applies to both non-shared and shared caches. "MUST NOT store" in this context means that the cache MUST NOT intentionally store the information in non-volatile storage, and MUST make a best-effort attempt to remove the information from volatile storage as promptly as possible after forwarding it. 
| -| | | 362 | SHOULD | 14.9.3 | consider responses with an Expires value that is \<= the response date and no cache-control header field to be non-cacheable | | +| | | 362 | SHOULD | 14.9.3 | consider responses with an Expires value that is \<= the response date and no cache-control header field to be non-cachable | | | | | 363 | MUST | 14.9.3 | mark stale responses with Warning 110 | | | | | 364 | MAY | 14.9.3 | have squid configurable to return stale responses even when not requested by clients but responses served MUST NOT conlict with other MUST or MUST NOT requirements | | | | | 365 | MUST NOT | 14.9.4 | use a cached copy to respond to a request with cache-control: no-cache or Pragma: no-cache | | @@ -455,8 +455,8 @@ categories: [WantedFeature, Feature] | | | 430 | MUST | 14.42 | use the upgrade header in a response with status code 101 | | | | | 431 | MUST | 14.42 | include the Upgrade connection-token whenever we use the Upgrade header | | | | | 432 | SHOULD | 14.43 | for the client - include a user-Agent field in requests | | -| | | 433 | SHOULD | 14.44 | include a Vary header on any cacheable response we generate that used server negotiation | | -| | | 434 | MAY | 14.44 | for the 'server' include a vary header with a non-cacheable response the used server negotiation | | +| | | 433 | SHOULD | 14.44 | include a Vary header on any cachable response we generate that used server negotiation | | +| | | 434 | MAY | 14.44 | for the 'server' include a vary header with a non-cachable response that used server negotiation | | | | | 435 | MAY | 14.44 | assume the same response will be given by a server for future requests with the same request field values as those listed by the vary header in the response whilst the response is still fresh | | | | | 436 | MUST NOT | 14.44 | generate a \* value for a vary field | | | | | 437 | MUST | 14.45 | fill in the Via header | | diff --git a/docs/Internals/CommApi.md b/docs/Internals/CommApi.md index 7ffd176a..5eca98d8 100644 --- a/docs/Internals/CommApi.md +++ b/docs/Internals/CommApi.md @@ -104,4 +104,4 @@ CommSelectEngine::checkEvents() - If the average 'web' object size is still under 64k in size then we should be able to do all of that in a single write() (or writev()) without any copying. -- Whats the most optimal size to read/write? +- What is the optimal size to read/write? diff --git a/docs/Internals/StoreAPI.md b/docs/Internals/StoreAPI.md index d190af9c..944a9d82 100644 --- a/docs/Internals/StoreAPI.md +++ b/docs/Internals/StoreAPI.md @@ -7,7 +7,7 @@ class StoreSearch : RefCountable virtual void next(void (callback)(void *cbdata), void *cbdata) = 0; /* return true if a new StoreEntry is immediately available */ virtual bool next() = 0; - /* has an error occured ? */ + /* has an error occurred ? */ virtual bool error() const = 0; /* are we at the end of the iterator ? */ virtual bool isDone() const = 0; diff --git a/docs/KnowledgeBase/Benchmarks.md b/docs/KnowledgeBase/Benchmarks.md index 1f0e3866..0614b8eb 100644 --- a/docs/KnowledgeBase/Benchmarks.md +++ b/docs/KnowledgeBase/Benchmarks.md @@ -45,7 +45,7 @@ This number was taken in a **controlled test environment**. It has nothing to do with the numbers someone would get in a production environment; it's just an estimate of how fast squid can be. Squid was configured to do no logging, no access control, and apachebench was used -to hammer squid asking 10M times for a static, cacheable, 600-bytes long +to hammer squid asking 10M times for a static, cachable, 600-byte-long document. 
Of the 4 cores, 3 were running a multi-worker squid, one was running ab over the loopback interface. @@ -229,7 +229,7 @@ nothing to do with the numbers someone would get in a production environment; it's just an estimate of how fast squid can be. Squid was configured to do no logging and apachebench was used to hammer squid asking 250K times for a blocked url (leading to a 403 response with a -location header) or with a cacheable, 16KB long document. Of the 4 +location header) or with a cachable, 16KB long document. Of the 4 cores, 2 were running a multi-worker squid. The apache benchmark was run from another host and from the same host with similar results. diff --git a/docs/KnowledgeBase/BrokenWindowSize.md b/docs/KnowledgeBase/BrokenWindowSize.md index bfeae3b5..0067c299 100644 --- a/docs/KnowledgeBase/BrokenWindowSize.md +++ b/docs/KnowledgeBase/BrokenWindowSize.md @@ -69,7 +69,7 @@ This isn't such a problem with desktops talking directly to servers because desktops typically have small window sizes and TCP scale factors configured and thus they tend not to be too far "out of whack" with what the server believes. Modern server operating systems tend to have larger -window sizes and TCP scale factors which tend to aggrivate the issue. +window sizes and TCP scale factors which tend to aggravate the issue. ## Workaround @@ -81,7 +81,7 @@ proxy server. Under Linux this is done by: Other platforms will implement it differently. Another possibility is to add in specific routes to target networks -which force a TCP window size maximum of 65535. This currently can't be +which force a TCP window size maximum of 65535. This currently cannot be done automatically by Squid. ## Thanks diff --git a/docs/KnowledgeBase/HierarchyControl.md b/docs/KnowledgeBase/HierarchyControl.md index 1a9157d0..4a3286b7 100644 --- a/docs/KnowledgeBase/HierarchyControl.md +++ b/docs/KnowledgeBase/HierarchyControl.md @@ -32,8 +32,8 @@ The various directives are evaluated in this order: The purpose of cache hierarchy is to maximize the chance of finding objects in siblings, so a set of heuristics is applied to try and -determine in advance whether an object is likely to be cacheable. A few -objects are **not** cacheable, and are thus **not** hierarchic. Those +determine in advance whether an object is likely to be cachable. A few +objects are **not** cachable, and are thus **not** hierarchic. Those are: - reload requests diff --git a/docs/KnowledgeBase/HostHeaderForgery.md b/docs/KnowledgeBase/HostHeaderForgery.md index 4e79f09e..21bfc2bf 100644 --- a/docs/KnowledgeBase/HostHeaderForgery.md +++ b/docs/KnowledgeBase/HostHeaderForgery.md @@ -145,7 +145,7 @@ work: can attempt to use EDNS to get larger packets with all IPs of these domains by setting the [dns_packet_max](http://www.squid-cache.org/Doc/config/dns_packet_max) - directive. This reduces Squids chance of loosing the IP the + directive. 
This reduces Squid's chance of losing the IP the client is connecting to but requires both your resolver to support EDNS and network to support jumbograms * restrict HTTP persistent (keep-alive) connections diff --git a/docs/KnowledgeBase/LdapBackedDigestAuthentication.md b/docs/KnowledgeBase/LdapBackedDigestAuthentication.md index ef86fe53..e2696ac1 100644 --- a/docs/KnowledgeBase/LdapBackedDigestAuthentication.md +++ b/docs/KnowledgeBase/LdapBackedDigestAuthentication.md @@ -30,7 +30,7 @@ running, but is expected from who are reading this: To manipulate the attributes in LDAP was used the tools from the package ldap-utils (those beginning with ldap\* and used to manipulate the base -when running, pretty standard). Theres n ways to do that, feeding the +when running, pretty standard). There are n ways to do that, feeding the base with LDIF files, using administration tools with a web interface, these will not be shown here. LDAP can be populated in various different forms, so, it is expected that yours can be a little different than mine @@ -48,7 +48,7 @@ administrator is "cn=admin,dc=minharede,dc=lan" with a password **How the digest is calculated and what is expected to be in the base** The base needs to hold an attribute containing a pair, realm and H(A1) -separated by a separator like realm:H(A1) inside a distiguished name +separated by a separator like realm:H(A1) inside a distinguished name representing an user name. Where H(A1) is the digested value of username:realm:password. diff --git a/docs/KnowledgeBase/NTLMAuthGoryDetails.md b/docs/KnowledgeBase/NTLMAuthGoryDetails.md index 9e9ec5c9..6e69d894 100644 --- a/docs/KnowledgeBase/NTLMAuthGoryDetails.md +++ b/docs/KnowledgeBase/NTLMAuthGoryDetails.md @@ -27,7 +27,7 @@ is available from [Samba's](http://www.samba.org) repository. specified. Of course, additional Proxy-Authenticate headers might be supplied to announce other supported authentication schemes. There is a bug in all version of Microsoft Internet Explorer by which the - NTLM authentication scheme MUST be declared first or it won't be + NTLM authentication scheme MUST be declared first or it will not be selected. This goes against RFC 2616, which recites "A user agent MUST choose to use the strongest auth scheme it understands" and NTLM, while broken in many ways, is still worlds stronger than @@ -35,7 +35,7 @@ is available from [Samba's](http://www.samba.org) repository. 1. At this point, Squid disconnects the connection, forcing the client to initiate a new connection, regardless of any keep-alive directives from the client. This is a bug-compatibility issue. It - may not be required with HTTP/1.1, but there's no way to make sure. + may not be required with HTTP/1.1, but there is no way to make sure. 1. The client re connects and issues a GET-request, this time with an accompanying `Proxy-Authorization: NTLM some_more_stuff` header, where some_more_stuff is a base64-encoded negotiate packet. The @@ -50,7 +50,7 @@ is available from [Samba's](http://www.samba.org) repository. 1. The client sends a new GET-request, along with an header: `Proxy-Authenticate: NTLM cmon_we_are_almost_done` where cmon_we_are_almost_done is an authenticate packet. The packet - includes informations about the user name and domain, the challenge + includes information about the user name and domain, the challenge nonce encoded with the user's password (actually it MIGHT contain it encoded TWICE using different algorithms). 1. 
 1. Either the server denies the authentication via a 407/DENIED or
diff --git a/docs/KnowledgeBase/NTLMIssues.md b/docs/KnowledgeBase/NTLMIssues.md
index 42572229..4230e87c 100644
--- a/docs/KnowledgeBase/NTLMIssues.md
+++ b/docs/KnowledgeBase/NTLMIssues.md
@@ -25,7 +25,7 @@ and the helper use) and the Squid NTLM authenticator protocol.
 Due to the way NTLM authentication over HTTP has been designed by
 Microsoft, each new TCP connection needs to be denied twice to perform
-the authentication handshake. Then as long as it's kept alive it won't
+the authentication handshake. Then as long as it's kept alive it will not
 need any further authentication. Yes, it breaks protocol layering. Yes,
 it breaks HTTP's statelessness. And yes, it wastes lots of bandwidth (two
 \~2kb denies for an average-sized 16k object means a whopping 20%
diff --git a/docs/KnowledgeBase/NoForwardProxyPorts.md b/docs/KnowledgeBase/NoForwardProxyPorts.md
index 2c936932..b482ced3 100644
--- a/docs/KnowledgeBase/NoForwardProxyPorts.md
+++ b/docs/KnowledgeBase/NoForwardProxyPorts.md
@@ -5,7 +5,7 @@ categories: KnowledgeBase
 
 ## Synopsis
 
-Squid has been configuered without any port capable of receiving
+Squid has been configured without any port capable of receiving
 forward-proxy traffic.
 
 ## Symptoms
diff --git a/docs/KnowledgeBase/OptimalCossParameters.md b/docs/KnowledgeBase/OptimalCossParameters.md
index f7dfe233..b34f9e0d 100644
--- a/docs/KnowledgeBase/OptimalCossParameters.md
+++ b/docs/KnowledgeBase/OptimalCossParameters.md
@@ -12,7 +12,7 @@ categories: KnowledgeBase
 [COSS](/Features/CyclicObjectStorageSystem) or Cyclic Object Storage
 System is the fastest disk storage method available to Squid. The
 [SquidFaq](/SquidFaq)
-contains information about its configureable parameters, while here we
+contains information about its configurable parameters, while here we
 want to focus on how to optimize those parameters for a typical
 proxying setup for maximum performance.
 
diff --git a/docs/KnowledgeBase/PerformanceAnalysis.md b/docs/KnowledgeBase/PerformanceAnalysis.md
index ac63375c..222144a4 100644
--- a/docs/KnowledgeBase/PerformanceAnalysis.md
+++ b/docs/KnowledgeBase/PerformanceAnalysis.md
@@ -10,7 +10,7 @@ source of performance information as it raises their performance
 expectations beyond what is reasonable in a work environment.
 
 The first step is to quantify and measure the problem, since users
-almost always have a subjective (and not quantitative) view of what's
+almost always have a subjective (and not quantitative) view of what is
 going on.
 
 1. Try a simple test: on a test client system with enough network
@@ -24,7 +24,7 @@ going on.
    squid restarting unexpectedly or complaining about some resource
    being unavailable (for instance, is it low on file descriptors?)
 1. Check your uplink congestion rate. Is it congested? Squid can help
-   with a congested uplink, but can't perform miracles. What about
+   with a congested uplink, but cannot perform miracles. What about
    latencies? Do a traceroute to a test site and check what is the
    performance on the first two-three hops: the problem might not be
    with your uplink, but with your provider's.
@@ -38,7 +38,7 @@ going on.
    using authentication check the authenticators' queues congestion -
    all these things add latency to a request handling, and that can
    generally make an user's browsing experience much worse.
-   Repeat the analisys at different times in different days, check for
+   Repeat the analysis at different times on different days, check for
    variations in the vital parameters.
 1. Collect a few days' worth of logs, and run on them a statistics
    software such as calamaris or webalizer, and start looking for
diff --git a/docs/KnowledgeBase/ProxyPacSlow.md b/docs/KnowledgeBase/ProxyPacSlow.md
index f959b994..ec086775 100644
--- a/docs/KnowledgeBase/ProxyPacSlow.md
+++ b/docs/KnowledgeBase/ProxyPacSlow.md
@@ -29,7 +29,7 @@ network/netmask range. This requires the browser to perform a DNS
 lookup to map the given host to an IP address before it can attempt
 the match.
 
-Some browsers will do a seperate DNS lookup for each `isInNet()`
+Some browsers will do a separate DNS lookup for each `isInNet()`
 function call, resulting in a very long delay before finally completing
 the proxy lookup. This can result in slow or non-functional web
 browsing.
diff --git a/docs/KnowledgeBase/TransparentProxySelectiveBypass.md b/docs/KnowledgeBase/TransparentProxySelectiveBypass.md
index a517367d..3788db54 100644
--- a/docs/KnowledgeBase/TransparentProxySelectiveBypass.md
+++ b/docs/KnowledgeBase/TransparentProxySelectiveBypass.md
@@ -14,7 +14,7 @@ Yes, it is possible to bypass a Squid running as an interception proxy.
 Except for the fact that it's not up to squid to do it, but it's a task
 for the underlying interception technology.
 
-Once Squid gets engaged to serve a request, it can't declare itself out
+Once Squid gets engaged to serve a request, it cannot declare itself out
 of the game, but has to either service it or fail it.
 
 This requirement also determines what kind of filtering is possible;
diff --git a/docs/KnowledgeBase/Ubuntu.md b/docs/KnowledgeBase/Ubuntu.md
index d93ae9be..1a925db3 100644
--- a/docs/KnowledgeBase/Ubuntu.md
+++ b/docs/KnowledgeBase/Ubuntu.md
@@ -56,7 +56,7 @@ discover the dependency package and install it.
 
 ### Init Script
 
-The init.d script is part of the official Debain/Ubuntu packaging. It
+The init.d script is part of the official Debian/Ubuntu packaging. It
 does not come with Squid directly. So you will need to download a copy
 from the Debian repository to /etc/init.d/squid
 
diff --git a/docs/KnowledgeBase/UnparseableHeader.md b/docs/KnowledgeBase/UnparseableHeader.md
index ec67cecb..5ddc090a 100644
--- a/docs/KnowledgeBase/UnparseableHeader.md
+++ b/docs/KnowledgeBase/UnparseableHeader.md
@@ -46,7 +46,7 @@ some form of serialized data.
 
 ## Workaround
 
-- Fix the software sending this header. if you cant do that yourself
+- Fix the software sending this header. If you cannot do that yourself
   report it to the broken server or client software authors please.
   This used to be a big problem around 2005 but has become less common
   now that all the middleware proxies dump these requests. The authors
diff --git a/docs/KnowledgeBase/WhatIsNumClients.md b/docs/KnowledgeBase/WhatIsNumClients.md
index 9e9717bb..a49f4241 100644
--- a/docs/KnowledgeBase/WhatIsNumClients.md
+++ b/docs/KnowledgeBase/WhatIsNumClients.md
@@ -5,11 +5,11 @@ categories: KnowledgeBase
 
 In the [cache manager](/Features/CacheManager)'s
 "general runtime information" page, Squid specifies the number of
-clients accesssing the cache; but WHAT it is is not really explained
+clients accessing the cache; but WHAT it is is not really explained
 anywhere.
 
 Technically speaking, it's the size of the clients database, where Squid
-records some informations about the clients haivng recently accessed its
+records some information about the clients having recently accessed its
 services.
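
As a quick, illustrative way to peek at that database on a live proxy,
the cache manager's client_list page can be queried from the command
line. This sketch assumes the squidclient tool is installed and that
your manager ACLs allow requests from localhost:

```bash
# A minimal probe, not an official procedure: dump the first few
# entries Squid keeps about recently-seen clients.
squidclient mgr:client_list | head -n 20
```
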
**So what is a "Client"?** @@ -22,5 +22,5 @@ have **OR** - performed more than one HTTP or ICP request in th epast 5 minutes -This logic is hard-coded in the Squid source and at this time can't be +This logic is hard-coded in the Squid source and at this time cannot be changed. \ No newline at end of file diff --git a/docs/KnowledgeBase/Windows.md b/docs/KnowledgeBase/Windows.md index 103a43ec..08e84566 100644 --- a/docs/KnowledgeBase/Windows.md +++ b/docs/KnowledgeBase/Windows.md @@ -224,7 +224,7 @@ environments, and -devel version of libraries must be installed. Requires the latest packages from with GCC 8 or later compiler. > :warning: - This section needs re-writing. This environment has not successfuly built since [Squid-3.4](Releases/Squid-3.4). + This section needs re-writing. This environment has not successfully built since [Squid-3.4](Releases/Squid-3.4). In order to compile squid using the MinGW environment, the packages MSYS, MinGW and msysDTK must be installed. Some additional libraries and diff --git a/docs/NewClientSide.md b/docs/NewClientSide.md index be8214c1..54687b01 100644 --- a/docs/NewClientSide.md +++ b/docs/NewClientSide.md @@ -3,7 +3,7 @@ categories: WantedFeature --- # Another Client Side? -Or, "a new HTTP server side", as thats what it is. +Or, "a new HTTP server side", as that's what it is. A HTTP server side should implement the following: - Network connection management @@ -15,9 +15,9 @@ A HTTP server side should implement the following: What it might implement: - HTTP authentication? Or could that be implemented between the HTTP network server and the HTTP request queue? -- SSL. Thats a connection property. +- SSL. That's a connection property. -What it won't implement: +What it will not implement: - ACL checks: that should be done as part of the HTTP request queue - URL rewriting: that should be done as part of the HTTP request queue - Transfer/Content encoding (deflate/gzip); that should be done as @@ -53,7 +53,7 @@ What it won't implement: Its relatively easy to handle errors in a single-process non-threaded setup - just abort all the outstanding requests and delete the object -there and then. This probably won't cut it in a threaded setup, so: +there and then. This probably will not cut it in a threaded setup, so: - The connection closing shouldn't force the object to immediately disappear - it should go into a CLOSED state @@ -71,7 +71,7 @@ In theory the server connections should be self-contained; so multiple threads can run multiplexed server connections without any interthread locking needed. This might not be so true for certain 'things' (such as a shared HTTP authentication cache, DNS requests, etc) but these could -be seperate message queues. +be separate message queues. The trick is to keep the server side around long enough to receive all the queued messages it has or be able to cancel them. diff --git a/docs/ProgrammingGuide/Architecture.md b/docs/ProgrammingGuide/Architecture.md index 75e6c795..23a44098 100644 --- a/docs/ProgrammingGuide/Architecture.md +++ b/docs/ProgrammingGuide/Architecture.md @@ -10,7 +10,7 @@ applications there is no relationship between packets (a layer 3 concept) and the traffic received by Squid. Instead of packets HTTP operates on a **message** basis (called segments in the OSI model definitions), where an HTTP request and response can each be loosely -considered equivelent to one "packet" in a transport architecture. Just +considered equivalent to one "packet" in a transport architecture. 
Just like IP packets HTTP messages are stateless and the delivery is
entirely optional for process. See the RFC
[7230](https://tools.ietf.org/rfc/rfc7230) texts for a better
diff --git a/docs/ProgrammingGuide/CacheMgrApi.md b/docs/ProgrammingGuide/CacheMgrApi.md
index 8814b433..83b53dc2 100644
--- a/docs/ProgrammingGuide/CacheMgrApi.md
+++ b/docs/ProgrammingGuide/CacheMgrApi.md
@@ -6,7 +6,7 @@
 This page is a work in progress. It reflects the discoveries by
 [FrancescoChemolli](/FrancescoChemolli) as it
 tries to implement the new cachemgr framework. It may contain
- inaccurate informations.
+ inaccurate information.
 
 This document details how to implement a multi-cpu cache manager
 action for Squid 3.2+, following the API framework implemented by
diff --git a/docs/ProgrammingGuide/CbData.md b/docs/ProgrammingGuide/CbData.md
index 0436b9b5..6d0bc42c 100644
--- a/docs/ProgrammingGuide/CbData.md
+++ b/docs/ProgrammingGuide/CbData.md
@@ -117,7 +117,7 @@ Here you can find some examples on how to use cbdata, and why
 
 ### Asynchronous operation without cbdata, showing why cbdata is needed
 
-For a asyncronous operation with callback functions, the normal sequence
+For an asynchronous operation with callback functions, the normal sequence
 of events in programs NOT using cbdata is as follows:
 ```c++
     /* initialization */
@@ -125,10 +125,10 @@ of events in programs NOT using cbdata is as follows:
     ...
     our_data = malloc(...);
     ...
-    /* Initiate a asyncronous operation, with our_data as callback_data */
+    /* Initiate an asynchronous operation, with our_data as callback_data */
     fooOperationStart(bar, callback_func, our_data);
     ...
-    /* The asyncronous operation completes and makes the callback */
+    /* The asynchronous operation completes and makes the callback */
     callback_func(callback_data, ....);
     /* Some time later we clean up our data */
     free(our_data);
@@ -144,7 +144,7 @@ the callback is invoked causing a program failure or memory corruption:
     ...
     our_data = malloc(...);
     ...
-    /* Initiate a asyncronous operation, with our_data as callback_data */
+    /* Initiate an asynchronous operation, with our_data as callback_data */
     fooOperationStart(bar, callback_func, our_data);
     ...
     /* ouch, something bad happened elsewhere.. try to cleanup
@@ -155,18 +155,18 @@ the callback is invoked causing a program failure or memory corruption:
      */
     free(our_data);
     ...
-    /* The asyncronous operation completes and makes the callback */
+    /* The asynchronous operation completes and makes the callback */
     callback_func(callback_data, ....);
     /* CRASH, the memory pointer to by callback_data is no longer valid
      * at the time of the callback */
 ```
-### Asyncronous operation with cbdata
+### Asynchronous operation with cbdata
 
 The callback data allocator lets us do this in a uniform and safe
 manner. The callback data allocator is used to allocate, track and free
 memory pool objects used during callback operations. Allocated memory is
-locked while the asyncronous operation executes elsewhere, and is freed
+locked while the asynchronous operation executes elsewhere, and is freed
 when the operation completes. The normal sequence of events is:
 ```c++
     /* initialization */
@@ -174,13 +174,13 @@ when the operation completes. The normal sequence of events is:
     ...
     our_data = cbdataAlloc(type_of_data);
     ...
-    /* Initiate a asyncronous operation, with our_data as callback_data */
+    /* Initiate an asynchronous operation, with our_data as callback_data */
     fooOperationStart(..., callback_func, our_data);
     ...
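     /* A sketch of why the next step matters: fooOperationStart() must
      * not rely on the raw our_data pointer alone. It takes the counted
      * reference below so that, at callback time, it can ask whether the
      * data is still valid instead of touching possibly-freed memory. */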
     /* foo */
     void *local_pointer = cbdataReference(callback_data);
     ....
-    /* The asyncronous operation completes and makes the callback */
+    /* The asynchronous operation completes and makes the callback */
     void *cbdata;
     if (cbdataReferenceValidDone(local_pointer, &cbdata))
         callback_func(...., cbdata);
@@ -197,7 +197,7 @@ fooOperantionComplete(...).
     ...
     our_data = cbdataAlloc(type_of_data);
     ...
-    /* Initiate a asyncronous operation, with our_data as callback_data */
+    /* Initiate an asynchronous operation, with our_data as callback_data */
     fooOperationStart(..., callback_func, our_data);
     ...
     /* foo */
@@ -206,10 +206,10 @@ fooOperantionComplete(...).
     /* something bad happened elsewhere.. cleanup */
     cbdataFree(our_data);
     ...
-    /* The asyncronous operation completes and tries to make the callback */
+    /* The asynchronous operation completes and tries to make the callback */
     void *cbdata;
     if (cbdataReferenceValidDone(local_pointer, &cbdata))
-        /* won't be called, as the data is no longer valid */
+        /* will not be called, as the data is no longer valid */
         callback_func(...., cbdata);
 ```
 In this case, when `cbdataFree` is called before
diff --git a/docs/ProgrammingGuide/ClientStreams.md b/docs/ProgrammingGuide/ClientStreams.md
index 4f4897b7..df22efc2 100644
--- a/docs/ProgrammingGuide/ClientStreams.md
+++ b/docs/ProgrammingGuide/ClientStreams.md
@@ -44,7 +44,7 @@ clientSendMoreData to send data down the pipeline.
 client POST bodies do not use a pipeline currently, they use the
 previous code to send the data. This is a TODO when time permits.
 
-## Whats in a node
+## What is in a node
 
 Each node must have:
 
@@ -133,6 +133,6 @@ Parameters:
 
 - clienthttpRequest \* - MUST NOT be NULL.
 
-Side effects: Detachs the tail of the stream. CURRENTLY DOES NOT clean
+Side effects: Detaches the tail of the stream. CURRENTLY DOES NOT clean
 up the tail node data - this must be done separately. Thus Abort may
 ONLY be called by the tail node.
diff --git a/docs/ProgrammingGuide/DoxygenDocumentation.md b/docs/ProgrammingGuide/DoxygenDocumentation.md
index e43750f4..ef72bcb7 100644
--- a/docs/ProgrammingGuide/DoxygenDocumentation.md
+++ b/docs/ProgrammingGuide/DoxygenDocumentation.md
@@ -100,7 +100,7 @@ or
 *
 * This function does activity Y, requiring blah blah blah.
 * End with an empty line.
- * [\note particular informations]
+ * [\note particular information]
 *
 * \param paramName paramDesc
 * ...
diff --git a/docs/ProgrammingGuide/LeakHunting.md b/docs/ProgrammingGuide/LeakHunting.md
index ca9c66fc..039b117c 100644
--- a/docs/ProgrammingGuide/LeakHunting.md
+++ b/docs/ProgrammingGuide/LeakHunting.md
@@ -5,7 +5,7 @@
 Memory management is a thorny issue in Squid. Its single-process
 nature makes it very important no to leak memory in any circumstance, as
 even a single leaked byte per request can grind a proxy to a halt in a few
-hours of production useage.
+hours of production usage.
 
 # Valgrind
 
diff --git a/docs/ProgrammingGuide/LibraryAutoconf.md b/docs/ProgrammingGuide/LibraryAutoconf.md
index 1f650599..7097e90c 100644
--- a/docs/ProgrammingGuide/LibraryAutoconf.md
+++ b/docs/ProgrammingGuide/LibraryAutoconf.md
@@ -16,7 +16,7 @@ absence and is a fatal error.
 * When the user specifies `--with-foo=PATH` the library shall be
   detected at the specified path.
-* When the user specifies `--without-foo` no tests for the librarry
+* When the user specifies `--without-foo` no tests for the library
  will be performed, nor will it be used by Squid.
* When the library is absent API feature tests, hacks and workarounds for the library should not be searched for. This reduces the time diff --git a/docs/ProgrammingGuide/RequestQueues.md b/docs/ProgrammingGuide/RequestQueues.md index 2f6a9541..a57bd88e 100644 --- a/docs/ProgrammingGuide/RequestQueues.md +++ b/docs/ProgrammingGuide/RequestQueues.md @@ -39,7 +39,7 @@ queue. A queue runner will scan the pending request queue and decide what to do. In the case of a proxy it'll want to find or create a client to -satisfy the request. Once the request has been satisified somehow it'll +satisfy the request. Once the request has been satisfied somehow it'll be attached to a client, forming the other end of the data pipeline. The request moves to the "In progress HTTP request" queue and begins data exchange. The request is destroyed once both parties - client and server @@ -47,7 +47,7 @@ exchange. The request is destroyed once both parties - client and server ### Messages -The general exchange should involve simple messages. There's a handful +The general exchange should involve simple messages. There is a handful of message types: - A "request" message type - method, URL, version diff --git a/docs/ProgrammingGuide/StorageManager.md b/docs/ProgrammingGuide/StorageManager.md index 2bf21942..93f259fb 100644 --- a/docs/ProgrammingGuide/StorageManager.md +++ b/docs/ProgrammingGuide/StorageManager.md @@ -63,7 +63,7 @@ squid/src/fs/$type/ from a Makefile.in file. configure will take a list of storage types through the *--enable-store-io* parameter. This parameter takes a list of space -seperated storage types. For example, --enable-store-io="ufs coss" . +separated storage types. For example, --enable-store-io="ufs coss" . Each storage type must create an archive file `in squid/src/fs/$type.a` . This file is automatically linked into squid at compile time. @@ -181,7 +181,7 @@ struct _SwapDir { STFREE *freefs; /* Free the fs data */ STDBLCHECK *dblcheck; /* Double check the obj integrity */ STSTATFS *statfs; /* Dump fs statistics */ - STMAINTAINFS *maintainfs; /* Replacement maintainence */ + STMAINTAINFS *maintainfs; /* Replacement maintenance */ STCHECKOBJ *checkob; /* Check if the fs will store an object, and get the FS load */ /* These two are notifications */ STREFOBJ *refobj; /* Reference this object */ @@ -404,7 +404,7 @@ The IO callback should be called when an error occurs and when the object is closed. Once the IO callback is called, the *storeIOState* becomes invalid. -*STOBJCREATE* returns a *storeIOState* suitable for writing on sucess, +*STOBJCREATE* returns a *storeIOState* suitable for writing on success, or NULL if an error occurs. ### openobj @@ -730,7 +730,7 @@ The replacement policy can be updated during STOBJREAD/STOBJWRITE/STOBJOPEN/ STOBJCLOSE as well as STREFOBJ and STUNREFOBJ. Care should be taken to only modify the relevant replacement policy entries in the StoreEntry. The responsibility of replacement -policy maintainence has been moved into each SwapDir so that the storage +policy maintenance has been moved into each SwapDir so that the storage code can have tight control of the replacement policy. Cyclic filesystems such as COSS require this tight coupling between the storage layer and the replacement policy. @@ -961,7 +961,7 @@ createRemovalPolicy_(char *arguments)` This function creates the policy instance and populates it with at least the API methods supported. 
Currently all API calls are mandatory, but the policy
implementation must make sure to NULL fill the structure prior to
-populating it in order to assure future API compability.
+populating it in order to ensure future API compatibility.
 
 It should also populate the _data member with a pointer to policy
 specific data.
diff --git a/docs/ProgrammingGuide/StoreClientInternals.md b/docs/ProgrammingGuide/StoreClientInternals.md
index fb29571a..d7a0426c 100644
--- a/docs/ProgrammingGuide/StoreClientInternals.md
+++ b/docs/ProgrammingGuide/StoreClientInternals.md
@@ -23,7 +23,7 @@ undocumented storeclient API which primarily consists of
 - `storeClientCopy` to request some data from the object
 - `storeUnregister` to unregister to client from the StoreEntry.
 
-client in this is "a internal reader of the StoreEntry", not neccesarily
+client in this is "an internal reader of the StoreEntry", not necessarily
 a client of Squid..
 
 But depending on "who" you are and why maybe this is not the interface
diff --git a/docs/Releases/Squid-3.5.md b/docs/Releases/Squid-3.5.md
index 16b42c4f..0c9386fa 100644
--- a/docs/Releases/Squid-3.5.md
+++ b/docs/Releases/Squid-3.5.md
@@ -63,13 +63,13 @@ Added in 3.5.13:
 
 Features removed in 3.5:
 
-  - COSS storage type has been superceded by
+  - COSS storage type has been superseded by
    [Rock](/Features/LargeRockStore) storage
    type.
-  - dnsserver helper has been superceded by DNS internal client.
+  - dnsserver helper has been superseded by DNS internal client.
-  - DNS helper API has been superceded by DNS internal client.
+  - DNS helper API has been superseded by DNS internal client.
 
 The intention with this series is to improve performance and HTTP
 support. Some remaining Squid-2.7 missing features are listed as
diff --git a/docs/Releases/Squid-4.md b/docs/Releases/Squid-4.md
index 0a16f937..c14b3975 100644
--- a/docs/Releases/Squid-4.md
+++ b/docs/Releases/Squid-4.md
@@ -20,7 +20,7 @@
   - Remove
    [cache_peer_domain](http://www.squid-cache.org/Doc/config/cache_peer_domain)
    directive
-  - basic_msnt_multi_domain_auth: Superceeded by
+  - basic_msnt_multi_domain_auth: Superseded by
    basic_smb_lm_auth
   - Update
    [external_acl_type](http://www.squid-cache.org/Doc/config/external_acl_type)
diff --git a/docs/RoadMap/Tasks.md b/docs/RoadMap/Tasks.md
index 743916c6..e57e6994 100644
--- a/docs/RoadMap/Tasks.md
+++ b/docs/RoadMap/Tasks.md
@@ -73,7 +73,7 @@ done.
   - 
   - Cleanup Squid component macros that enable/disable components:
-    1. .convention for Makefile.am conditionals is ENABLE_\* (currenty
+    1. .convention for Makefile.am conditionals is ENABLE_\* (currently
       some have incorrect USE_\* maro names)
   - Helper and Tool Manuals
    1. Write a manual/man(8) page for a helpers/ program that does not
diff --git a/docs/SquidCodingGuidelines/AutoMake.md b/docs/SquidCodingGuidelines/AutoMake.md
index 935f18eb..5d9c5385 100644
--- a/docs/SquidCodingGuidelines/AutoMake.md
+++ b/docs/SquidCodingGuidelines/AutoMake.md
@@ -23,7 +23,7 @@
   [Features/SourceLayout](/Features/SourceLayout)
 - convenience libraries should be named for the subdirectory they are
   within.
  For example; foo/libfoo.la or foo/libfoosomething.la
-- conveniece library names must contain only alphanumeric characters
+- convenience library names must contain only alphanumeric characters
   0-9 a-z, avoid upper case or punctuation
 
 **ENFORCED:**
diff --git a/docs/SquidFaq/AboutSquid.md b/docs/SquidFaq/AboutSquid.md
index 0ab6ecea..d4fe07b1 100644
--- a/docs/SquidFaq/AboutSquid.md
+++ b/docs/SquidFaq/AboutSquid.md
@@ -122,7 +122,7 @@ That question is best answered by the official mailing lists page at
 - [RFC 7235](https://datatracker.ietf.org/doc/html/rfc7235) - HTTP 1.1
   Authentication
 
-# What's the legal status of Squid?
+# What is the legal status of Squid?
 
 Squid is copyrighted by The Squid Software Foundation and contributors.
 Squid copyright holders are listed in the CONTRIBUTORS file.
diff --git a/docs/SquidFaq/BinaryPackages.md b/docs/SquidFaq/BinaryPackages.md
index bbac9d0c..ae1cf491 100644
--- a/docs/SquidFaq/BinaryPackages.md
+++ b/docs/SquidFaq/BinaryPackages.md
@@ -47,7 +47,7 @@ answer would be "please upgrade to the latest version as distributed by
 us". So please rely on your operating system's community and bug
 reporting systems as your first line of support
 
-But then, Squid users and develoers are a nice community with a genuine
+But then, Squid users and developers are a nice community with a genuine
 desire to help and they will, if they can. The
 [squid users community](http://www.squid-cache.org/Support/mailing-lists.html#squid-users)
diff --git a/docs/SquidFaq/BugReporting.md b/docs/SquidFaq/BugReporting.md
index 750b4ec4..319914ab 100644
--- a/docs/SquidFaq/BugReporting.md
+++ b/docs/SquidFaq/BugReporting.md
@@ -50,7 +50,7 @@ may be due to one of the following reasons:
   needs write permissions to [coredump destination
   directory](#coredump-location)
 - sysctl options
-: On systems such as FreeBSD, you won't get a coredump from programs that call
+: On systems such as FreeBSD, you will not get a coredump from programs that call
   setuid() and/or setgid() (like Squid sometimes does) unless you enable this
   option:
 ```
@@ -84,9 +84,9 @@ may be due to one of the following reasons:
   version of Squid where the debug symbols have not been removed.
 - Threads and Linux
 : On Linux, threaded applications do not generate core dumps. When
-  you use the aufs cache_dir type, it uses threads and you can't
+  you use the aufs cache_dir type, it uses threads and you cannot
   get a coredump.
-- It did leave a coredump file, you just can't find it.
+- It did leave a coredump file, you just cannot find it.
 
 ## Resource Limits
 
@@ -211,7 +211,7 @@ To disable systemd-coredump:
 If you CANNOT get Squid to leave a core file for you then one of the
 following approaches can be used
 
-First alternative is to start Squid under the contol of GDB
+The first alternative is to start Squid under the control of GDB
 
 ```
 % gdb /path/to/squid
@@ -231,7 +231,7 @@ quit
 ## Using gdb debugger on a live proxy (with minimal downtime)
 
 The drawback from the above is that it isn't really suitable to run on a
-production system as Squid then won't restart automatically if it
+production system as Squid then will not restart automatically if it
 crashes. The good news is that it is fully possible to automate the
 process above to automatically get the stack trace and then restart
 Squid.
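
The essence of that automation can be sketched in a few lines of shell.
This is illustrative only (not the script referenced below); the binary
path, Squid options and log location are placeholders to adapt:

```bash
# Run Squid under gdb in batch mode so a crash leaves a backtrace in
# the log, then loop so the proxy is started again afterwards.
while true; do
    gdb -batch -ex 'run -N -d 1' -ex 'backtrace' /usr/sbin/squid \
        >> /var/log/squid-gdb.log 2>&1
done
```
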
Here is a short automated script that should work:
diff --git a/docs/SquidFaq/ConfiguringBrowsers.md b/docs/SquidFaq/ConfiguringBrowsers.md
index 048aa182..df65a9dd 100644
--- a/docs/SquidFaq/ConfiguringBrowsers.md
+++ b/docs/SquidFaq/ConfiguringBrowsers.md
@@ -288,7 +288,7 @@ URL for where your new *wpad.dat* file can be found.
 
 i.e. _http://www.example.com/wpad.dat_.
 
-Test that that all works as per your script and network. There's no
+Test that it all works as per your script and network. There is no
 point continuing until this works ...
 
 ### Automatic WPAD with DNS
@@ -333,11 +333,11 @@ more reliable.
 
 by *Rodney van den Oever*
 
-There's one nasty side-effect to using auto-proxy scripts: if you start
+There is one nasty side-effect to using auto-proxy scripts: if you start
 the web browser it will try and load the auto-proxy-script.
 
 If your script isn't available either because the web server hosting the
-script is down or your workstation can't reach the web server (e.g.
+script is down or your workstation cannot reach the web server (e.g.
 because you're working off-line with your notebook and just want to
 read a previously saved HTML-file) you'll get different errors
 depending on the browser you use.
@@ -346,7 +346,7 @@ The Netscape browser will just return an error after a timeout (after
 that it tries to find the site 'www.proxy.com' if the script you use is
 called 'proxy.pac').
 
-The Microsoft Internet Explorer on the other hand won't even start, no
+The Microsoft Internet Explorer on the other hand will not even start, no
 window displays, only after about 1 minute it'll display a window asking
 you to go on with/without proxy configuration.
 
diff --git a/docs/SquidFaq/ConfiguringSquid.md b/docs/SquidFaq/ConfiguringSquid.md
index fb56d4b8..6c4367bc 100644
--- a/docs/SquidFaq/ConfiguringSquid.md
+++ b/docs/SquidFaq/ConfiguringSquid.md
@@ -60,7 +60,7 @@ the *etc* directory under the Squid installation directory
 
 ## How do I configure Squid to work behind a firewall?
 
-If you are behind a firewall which can't make direct connections to the
+If you are behind a firewall which cannot make direct connections to the
 outside world, you **must** use a parent cache. Normally Squid tries to
 be smart and only uses cache peers when it makes sense from a
 perspective of global hit ratio, and thus you need to tell Squid when it
@@ -90,7 +90,7 @@ may not be able to lookup external domains.
 
 If you use *never_direct* and you have multiple parent caches, then you
 probably will want to mark one of them as a default choice in case Squid
-can't decide which one to use. That is done with the *default* keyword
+cannot decide which one to use. That is done with the *default* keyword
 on a *cache_peer* line. For example:
 
     cache_peer xyz.mydomain.com parent 3128 0 no-query default
@@ -112,7 +112,7 @@ then some more temporary storage as work-areas, for instance when
 rebuilding *swap.state*. So in any case make sure to leave some extra
 room for this, or your cache will enter an endless crash-restart cycle.
 
-The second reason is fragmentation (note, this won't apply to the COSS
+The second reason is fragmentation (note, this will not apply to the COSS
 object storage engine - when it will be ready): filesystems can only do
 so much to avoid fragmentation, and in order to be effective they need
 to have the space to try and optimize file placement. If the disk is
@@ -122,10 +122,10 @@ most likely be your worst bottleneck, by far offsetting the modest gain
 you got by having more storage.
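
As a rough, illustrative way of turning that advice into a number (the
mount point is a placeholder, and the 70% factor is just the rule of
thumb worked through in the example that follows):

```bash
# Hypothetical sizing helper: offer cache_dir roughly 70% of the
# partition, keeping headroom for swap.state work and fragmentation.
part_mb=$(df -m /var/spool/squid | awk 'NR==2 {print $2}')
echo "suggested cache_dir size: $((part_mb * 70 / 100)) MB"
```
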
Let's see an example: you have a 9Gb disk (these times they're even hard -to find..). First thing, manifacturers often lie about disk capacity +to find..). First thing, manufacturers often lie about disk capacity (the whole Megabyte vs Mebibyte issue), and then the OS needs some space for its accounting structures, so you'll reasonably end up with 8Gib of -useable space. You then have to account for another 10% in overhead for +usable space. You then have to account for another 10% in overhead for Squid, and then the space needed for keeping fragmentation at bay. So in the end the recommended cache_dir setting is 6000 to 7000 Mebibyte. @@ -150,7 +150,7 @@ about using Squid in combination with http-gw from the [TIS toolkit](http://www.tis.com/). The most elegant way in my opinion is to run an internal Squid caching proxyserver which handles client requests and let this server forward it's requests to the http-gw running on the -firewall. Cache hits won't need to be handled by the firewall. +firewall. Cache hits will not need to be handled by the firewall. In this example Squid runs on the same server as the http-gw, Squid uses 8000 and http-gw uses 8080 (web). The local domain is home.nl. @@ -225,7 +225,7 @@ Advantages: Disadvantages: -- The internal Squid proxyserver can't (and shouldn't) work with other +- The internal Squid proxyserver cannot (and shouldn't) work with other parent or neighbor caches. - Initial requests are slower because these go through http-gw, http-gw also does reverse lookups. Run a nameserver on the firewall diff --git a/docs/SquidFaq/InnerWorkings.md b/docs/SquidFaq/InnerWorkings.md index 1205ab5d..86f048e0 100644 --- a/docs/SquidFaq/InnerWorkings.md +++ b/docs/SquidFaq/InnerWorkings.md @@ -69,7 +69,7 @@ The algorithm is somewhat more complicated when firewalls are involved. The [cache_peer](http://www.squid-cache.org/Doc/config/cache_peer) **no-query** option can be used to skip the ICP queries if the only -appropriate source is a parent cache (i.e., if there's only one place +appropriate source is a parent cache (i.e., if there is only one place you'd fetch the object from, why bother querying?) ## What features are Squid developers currently working on? @@ -238,7 +238,7 @@ But my busy caches have much lower times: ## How does Squid deal with Cookies? The presence of Cookies headers in **requests** does not affect whether -or not an HTTP reply can be cached. Similarly, the presense of +or not an HTTP reply can be cached. Similarly, the presence of *Set-Cookie* headers in **replies** does not affect whether the reply can be cached. @@ -276,7 +276,7 @@ rules. The refresh parameters are: - URL regular expression - *CONF_MIN*: The time (in minutes) an object without an explicit expiry time should be considered fresh. The recommended value is 0, - any higher values may cause dynamic applications to be erronously + any higher values may cause dynamic applications to be erroneously cached unless the application designer has taken the appropriate actions. - *CONF_PERCENT*: A percentage of the objects age (time since last @@ -329,13 +329,13 @@ the server-side reads. ## Why is my cache's inbound traffic equal to the outbound traffic? *I've been monitoring the traffic on my cache's ethernet adapter an -found a behavior I can't explain: the inbound traffic is equal to the +found a behavior I cannot explain: the inbound traffic is equal to the outbound traffic. The differences are negligible. The hit ratio reports 40%. 
Shouldn't the outbound be at least 40% greater than the inbound?*
 
 by [David J N Begley](mailto:david@avarice.nepean.uws.edu.au)
 
-I can't account for the exact behavior you're seeing, but I can offer
+I cannot account for the exact behavior you're seeing, but I can offer
 this advice; whenever you start measuring raw Ethernet or IP traffic on
 interfaces, you can forget about getting all the numbers to exactly
 match what Squid reports as the amount of traffic it has sent/received.
@@ -372,7 +372,7 @@ external Internet sites or from internal (to the organization) clients
 (making requests). If you want that, try looking at RMON2.
 
 Also, if you're talking about a 40% hit rate in terms of object
-requests/counts then there's absolutely no reason why you should expect
+requests/counts then there is absolutely no reason why you should expect
 a 40% reduction in traffic; after all, not every request/object is
 going to be the same size so you may be saving a lot in terms of
 requests but very little in terms of actual traffic.
@@ -388,8 +388,8 @@ something like this:
   older than [Squid-3.2](/Releases/Squid-3.2).
 - Responses with *Cache-Control: No-Store* are NOT cachable.
 - Responses for requests with an *Authorization* header are cachable
-  ONLY if the reponse includes *Cache-Control: Public* or some other
-  special parameters controling revalidation.
+  ONLY if the response includes *Cache-Control: Public* or some other
+  special parameters controlling revalidation.
 - The following HTTP status codes are cachable:
   - 200 OK
   - 203 Non-Authoritative Information
@@ -644,7 +644,7 @@ TCP allows connections to be in a "half-closed" state. This is
 accomplished with the *shutdown(2)* system call. In Squid, this means
 that a client has closed its side of the connection for writing, but
 leaves it open for reading. Half-closed connections are tricky because
-Squid can't tell the difference between a half-closed connection, and a
+Squid cannot tell the difference between a half-closed connection, and a
 fully closed one.
 
 If Squid tries to read a connection, and *read()* returns 0, and Squid
@@ -752,7 +752,7 @@ beginning. This header is used to store the URL MD5, some `StoreEntry`
 data, and more. When Squid opens a disk file for reading, it looks for
 the meta data header and unpacks it.
 
-This warning means that Squid couln't unpack the meta data. This is
+This warning means that Squid couldn't unpack the meta data. This is a
 non-fatal bug, from which Squid can recover. Perhaps the meta data was
 just missing, or perhaps the file got corrupted.
 
@@ -781,7 +781,7 @@ fails. Squid handles this as a failed ident lookup.
 then why not bind the local endpoint to the host's (intranet) IP
 address? Why make the masses suffer needlessly?*
 
-Because thats just how ident works. Please read
+Because that's just how ident works. Please read
 [RFC 931](ftp://ftp.isi.edu/in-notes/rfc931.txt), in particular the
 RESTRICTIONS section.
 
diff --git a/docs/SquidFaq/InstallingSquid.md b/docs/SquidFaq/InstallingSquid.md
index 6f315f60..92108af1 100644
--- a/docs/SquidFaq/InstallingSquid.md
+++ b/docs/SquidFaq/InstallingSquid.md
@@ -397,7 +397,7 @@ exit immediately, without closing any connections or log files. Use
 this only as a last resort.
 
 **-k debug** Sends an *USR2* signal, which causes Squid to generate full
-debugging messages until the next *USR2* signal is recieved. Obviously
+debugging messages until the next *USR2* signal is received. Obviously
 very useful for debugging problems.
 
 **-k check** Sends a "*ZERO*" signal to the Squid process. This simply
@@ -480,7 +480,7 @@ Yes. Running Squid on native ZFS-supporting systems, like Solaris or
 
 In general, just set up ZFS mirror (usually the best with separate
 controllers for each spindle) and set recordsize 4-64k (depending your
-cache prefferable cache_replacement_policy). Also it can better for
+cache preferable cache_replacement_policy). Also it can be better for
 disk IO performance to change primarycache=metadata and
 secondarycache=none, and atime=off on cache_dir filesystems. Consider
 to correctly set **logbias** property for zfs fs which Squid's cache
diff --git a/docs/SquidFaq/InterceptionProxy.md b/docs/SquidFaq/InterceptionProxy.md
index 1da383ce..7cdce68b 100644
--- a/docs/SquidFaq/InterceptionProxy.md
+++ b/docs/SquidFaq/InterceptionProxy.md
@@ -28,7 +28,7 @@ outlined by *Mark Elsen*:
   common).
 - Connection multiplexing does not work. Clients aware of the proxy can
   send requests for multiple domains down one proxy connection and
-  save resources while letting teh proxy do multiple backend
+  save resources while letting the proxy do multiple backend
   connections. When talking to an origin clients are not permitted to
   do this and will open many TCP connections for resources. This
   causes intercepting proxy to consume more network sockets than a
@@ -36,7 +36,7 @@ outlined by *Mark Elsen*:
 - Proxy authentication does not work.
 - IP based authentication by the origin fails because the users are
   all seen to come from the Interception Cache's own IP address.
-- You can't use IDENT lookups (which are inherently very insecure
+- You cannot use IDENT lookups (which are inherently very insecure
   anyway)
 - ARP relay breaks at the proxy machine.
 - Interception Caching only supports the HTTP protocol, not gopher,
@@ -308,7 +308,7 @@ map\!
 *[John](mailto:John.Saunders@scitec.com.au)* notes that you may be able
 to get around this bug by carefully writing your access lists. If the
 last/default rule is to permit then this bug would be a problem, but if
-the last/default rule was to deny then it won't be a problem. I guess
+the last/default rule was to deny then it will not be a problem. I guess
 fragments, other than the first, don't have the information available
 to properly policy route them. Normally TCP packets should not be
 fragmented, at least my network runs an MTU of 1500 everywhere to avoid
@@ -336,7 +336,7 @@ First, configure Squid for interception caching as detailed at the
 
 Next, configure the Foundry layer 4 switch to redirect traffic to your
 Squid box or boxes. By default, the Foundry redirects to port 80 of your
-squid box. This can be changed to a different port if needed, but won't
+squid box. This can be changed to a different port if needed, but will not
 be covered here.
 
 In addition, the switch does a "health check" of the port to make sure
@@ -800,7 +800,7 @@ Linux kernel, as if you are you simply need to modprobe the module to
 gain it's functionality.
 
 Ensure that the GRE code is either built as static or as a module by
-chosing the appropriate option in your kernel config. Then rebuild your
+choosing the appropriate option in your kernel config. Then rebuild your
 kernel. If it is a module you will need to:
 
     modprobe ip_gre
 
@@ -954,7 +954,7 @@ will want to read on to our troubleshooting section below.
 - Have you tried unloading ALL firewall rules on your cache and/or the
   inside address of your network device to see if that helps? If your
   router or cache are inadvertently blocking or dropping either the
-  WCCP control traffic or the GRE, things won't work.
+  WCCP control traffic or the GRE, things will not work.
 - If you are using WCCP on a cisco router or switch, is the router
   seeing your cache? Use the command show ip wccp web-cache detail
 - Look in your logs both in Squid (cache.log), and on your
@@ -1004,7 +1004,7 @@ information including the versions of your router, proxy, operating
 system, your traffic redirection rules, debugging output and any other
 things you have tried to the squid-users mailing list.
 
-### Why can't I use authentication together with interception proxying?
+### Why can I not use authentication together with interception proxying?
 
 Interception Proxying works by having an active agent (the proxy) where
 there should be none. The browser is not expecting it to be there, and
diff --git a/docs/SquidFaq/OperatingSquid.md b/docs/SquidFaq/OperatingSquid.md
index c0198221..937847fb 100644
--- a/docs/SquidFaq/OperatingSquid.md
+++ b/docs/SquidFaq/OperatingSquid.md
@@ -108,7 +108,7 @@ It might take a while, depending on who busy your cache is
 1. You must shutdown Squid:` squid -k shutdown`
 1. Once Squid exits, you may immediately start it up again.
 
-Since you deleted the old **cache_dir** from squid.conf, Squid won't
+Since you deleted the old **cache_dir** from squid.conf, Squid will not
 try to access that directory. If you use the RunCache script, Squid
 should start up again automatically.
 
@@ -262,7 +262,7 @@ object will not have changed, so the result is TCP_IMS_HIT. Squid will
 only return TCP_IMS_MISS if some other client causes a newer version of
 the object to be pulled into the cache.
 
-## Why do I need to run Squid as root? why can't I just use cache_effective_user root?
+## Why do I need to run Squid as root? Why can I not just use cache_effective_user root?
 
 - *by Antony Stone and Dave J Woolley*
 
diff --git a/docs/SquidFaq/OrderIsImportant.md b/docs/SquidFaq/OrderIsImportant.md
index 6e9d095a..d8070620 100644
--- a/docs/SquidFaq/OrderIsImportant.md
+++ b/docs/SquidFaq/OrderIsImportant.md
@@ -66,7 +66,7 @@ credentials are not present. The placement of these tests affects which
 rules around them require authentication.
 
 Similarly [acl](http://www.squid-cache.org/Doc/config/acl) testing
-authentication placement left-to-right on their line determins whether
+authentication placement left-to-right on their line determines whether
 the test bypasses, fails or triggers an auth challenges.
 
 ## Access Controls
diff --git a/docs/SquidFaq/RAID.md b/docs/SquidFaq/RAID.md
index a9a56f7d..71aa8662 100644
--- a/docs/SquidFaq/RAID.md
+++ b/docs/SquidFaq/RAID.md
@@ -83,7 +83,7 @@ large.
 
 As squid mostly deals with small I/O operations in the KB range randomly
 spread out over a large number of files RAID0 do not provide any
-benefits for Squid and only the drawbacks of loosing the whole cache
+benefits for Squid and only the drawbacks of losing the whole cache
 should a single drive fail.
 
 The choice of
diff --git a/docs/SquidFaq/RelatedSoftware.md b/docs/SquidFaq/RelatedSoftware.md
index 32c15251..04bab49d 100644
--- a/docs/SquidFaq/RelatedSoftware.md
+++ b/docs/SquidFaq/RelatedSoftware.md
@@ -91,4 +91,4 @@ kernel-based layer 3-7 load balancer for Linux
 
 The [Cacheability Engine](http://www.mnot.net/cacheability/) is a python
 script that validates an URL, analyzing the clues a web server gives to
-understand how cacheable is the served content.
\ No newline at end of file
+understand how cachable the served content is.
\ No newline at end of file diff --git a/docs/SquidFaq/SecurityPitfalls.md b/docs/SquidFaq/SecurityPitfalls.md index c0f3ec35..20b3a33c 100644 --- a/docs/SquidFaq/SecurityPitfalls.md +++ b/docs/SquidFaq/SecurityPitfalls.md @@ -137,7 +137,7 @@ details is not a good thing. For this reason the **very top** access control in Squid limits manager access on only be available to the special localhost IP. - acl manger url_regex -i ^cache_object:// /squid-internal-mgr/ + acl manager url_regex -i ^cache_object:// /squid-internal-mgr/ http_access allow localhost manager http_access deny manager diff --git a/docs/SquidFaq/SquidAcl.md b/docs/SquidFaq/SquidAcl.md index f6c5c35f..edf122ec 100644 --- a/docs/SquidFaq/SquidAcl.md +++ b/docs/SquidFaq/SquidAcl.md @@ -124,7 +124,7 @@ consists of a *list of values*. When checking for a match, the multiple values use OR logic. In other words, an ACL element is *matched* when any one of its values is a match. -You can't give the same name to two different types of ACL elements. It +You cannot give the same name to two different types of ACL elements. It will generate a syntax error. You can put different values for the same ACL name on different lines. @@ -615,7 +615,7 @@ change *url_regex* to *dstdomain* in this example. spammers. By blocking the spammer web sites in squid, users can no longer use up bandwidth downloading spam images and html. Even more importantly, they can no longer send out requests for things like - scripts and gifs that have a unique identifer attached, showing that + scripts and gifs that have a unique identifier attached, showing that they opened the email and making their addresses more valuable to the spammer. --> @@ -673,7 +673,7 @@ Similarly, if you said that *co.us* is GREATER than *fff.co.us*, then the Splay tree searching algorithm might never discover *co.us* as a match for *bbb.co.us*. -The bottom line is that you can't have one entry that is a subdomain of +The bottom line is that you cannot have one entry that is a subdomain of another. Squid will warn you if it detects this condition. ## Why does Squid deny some port numbers? @@ -789,7 +789,7 @@ Add some *arp* ACL lines to your squid.conf: http_access allow M2 http_access deny all -Run **squid -k parse** to confirm that the ARP / EUI supprot is +Run **squid -k parse** to confirm that the ARP / EUI support is available and the ACLs are going to work. ## Can I limit the number of connections from a client? diff --git a/docs/SquidFaq/SquidLogs.md b/docs/SquidFaq/SquidLogs.md index db3b87e8..94049fe4 100644 --- a/docs/SquidFaq/SquidLogs.md +++ b/docs/SquidFaq/SquidLogs.md @@ -86,7 +86,7 @@ outlined in the [KnowledgeBase](/KnowledgeBase): 3. [Host Header Forgery](/KnowledgeBase/HostHeaderForgery) 4. [Queue congestion](/KnowledgeBase/QueueCongestion) 5. [Too Many Queued Requests](/KnowledgeBase/TooManyQueued) -6. [Unparseable Header](/KnowledgeBase/UnparseableHeader) +6. [Unparsable Header](/KnowledgeBase/UnparseableHeader) ## access.log @@ -147,7 +147,7 @@ underscore characters) which describe the response sent to the client. | **TUNNEL** | A binary tunnel was established for this transaction. | - These tags are optional and describe some error conditions which - occured during response delivery (if any): + occurred during response delivery (if any): | --- | --- | | **ABORTED** | A client-to-Squid or Squid-to-server connection was closed unexpectedly, usually due to an I/O error or clean transport connection closure in the middle of some higher-level protocol message/negotiation. 
Before Squid v6, this tag was primarily seen when the client closed its connection to Squid before Squid could deliver the entire response. Since Squid v6, the tag also appears when Squid communication with an origin server or cache_peer is impossible (e.g., the server is refusing TCP connections) or aborted (e.g., an EOF in the middle of a chunked HTTP response body transfer). |
@@ -530,7 +530,7 @@ they do not become very large.
 
 > :warning:
   Logging is very important to Squid. In fact, it is so important that it will shut itself down if it
-  can't write to its logfiles. This includes cases such as a full log disk,
+  cannot write to its logfiles. This includes cases such as a full log disk,
   or logfiles getting too big.
 
 ## My log files get very big!
@@ -669,7 +669,7 @@ whole load of possible problems.
 
 > :warning:
   Logging is very important to Squid. In fact, it is so important that it will shut itself down if it
-  can't write to its logfiles.
+  cannot write to its logfiles.
 
 There are several alternatives which are much safer to setup and use.
 The basic capabilities present are :
diff --git a/docs/SquidFaq/SquidMemory.md b/docs/SquidFaq/SquidMemory.md
index d9a97f0e..082ea9bf 100644
--- a/docs/SquidFaq/SquidMemory.md
+++ b/docs/SquidFaq/SquidMemory.md
@@ -164,7 +164,7 @@ has reached.
 by [HenrikNordström](/HenrikNordstrom)
 
 Messages like "FATAL: xcalloc: Unable to allocate 4096 blocks of 1
-bytes!" appear when Squid can't allocate more memory, and on most
+bytes!" appear when Squid cannot allocate more memory, and on most
 operating systems (inclusive BSD) there are only two possible reasons:
 
 - The machine is out of swap
@@ -303,7 +303,7 @@ There are a number of things to try:
   [cache_mem](http://www.squid-cache.org/Doc/config/cache_mem)
   parameter in the config file. This controls how many "hot" objects
   are kept in memory. Reducing this parameter will not significantly
-  affect performance, but you may recieve some warnings in *cache.log*
+  affect performance, but you may receive some warnings in *cache.log*
   if your cache is busy.
 - Turn the
   [memory_pools](http://www.squid-cache.org/Doc/config/memory_pools)
@@ -388,7 +388,7 @@ allocating more to Squid via
 [cache_mem](http://www.squid-cache.org/Doc/config/cache_mem)
 will not help.
 
-## Why can't my Squid process grow beyond a certain size?
+## Why can my Squid process not grow beyond a certain size?
 
 by [Adrian Chadd](/AdrianChadd)
 
@@ -405,7 +405,7 @@ memory. Here are some things to keep in mind.
   documentation for specific details.
 - Some malloc implementations may not support \> 2gb of memory - eg
   dlmalloc. Don't use dlmalloc unless your platform is very broken
-  (and then realise you won't be able to use \>2gb RAM using it.)
+  (and then realise you will not be able to use \>2gb RAM using it.)
 - Make sure the Squid has been compiled to be a 64 bit binary (with
   modern Unix-like OSes you can use the 'file' command for this); some
   platforms may have a 64 bit kernel but a 32 bit userland, or the
diff --git a/docs/SquidFaq/SquidProfiling.md b/docs/SquidFaq/SquidProfiling.md
index 75acaa4b..c55bcb36 100644
--- a/docs/SquidFaq/SquidProfiling.md
+++ b/docs/SquidFaq/SquidProfiling.md
@@ -17,7 +17,7 @@ to do this, so don't worry too much!
 
 Squid is a CPU-intensive application (since, after all, it spends all
 of its time processing incoming data and generating data to send.) But
-there's many different types of CPU usage which can identify what you're
+there are many different types of CPU usage which can identify what you're
 running out of.
 - CPU spent in user-space: This is the CPU time spent by the Squid
@@ -52,7 +52,7 @@ resource variables and watch usage trends.
 ### What sort of things impact the performance of my Squid ?
 
 Squid will start suffering if you run out of any of your server
-resources. There's a few things that frequently occur:
+resources. There are a few things that frequently occur:
 
 - You just plain run out of CPU. This is where all of your resources
   are low save your kernel and user CPU usage. This may be because
@@ -70,7 +70,7 @@ resources. There's a few things that frequently occur:
   the system. Gigabit network cards are a good example of this. You
   trade off a few ms of latency versus a high interrupt load, but this
   doesn't matter on a server which is constantly handling packets.
-  Take a look at your hardware documentation and see whats available.
+  Take a look at your hardware documentation and see what is available.
 - Linux servers spending a lot of time in IOWAIT can also be because
   you're overloading your disks with IO. See what your disk IO looks
   like in vmstat. You could look at moving to the aufs/diskd
@@ -90,7 +90,7 @@ resources. There's a few things that frequently occur:
 
 The best thing you can do to identify where all your CPU usage is going
 is to use a process or system profiler. Personally, I use oprofile.
-gprof isn't at all accurate with modern CPU clockspeeds. There's other
+gprof isn't at all accurate with modern CPU clockspeeds. There are other
 options - hwpmc under FreeBSD, for example, can do somewhat what
 oprofile can but it currently has trouble getting any samples from Squid
 in userspace. Grr. *perfmon* is also an option if you don't have root
@@ -101,10 +101,10 @@ OProfile under Linux is easy to use and has quite a low overhead. Here's
 how I use oprofile:
 
 - Install oprofile
-- Check whats available - *opcontrol -l*
+- Check what is available - *opcontrol -l*
 - If you see a single line regarding "timer interrupt mode", you're
   stuffed. Go read the OProfile FAQ and see if you can enable ACPI.
-  You won't get any meaningful results out of OProfile in timer
+  You will not get any meaningful results out of OProfile in timer
   interrupt mode.
 - Set it up - *opcontrol --setup -c 4 -p library,kernel --no-vmlinux*
   (if you have a vmlinux image, read the opcontrol manpage for
@@ -118,7 +118,7 @@ Here's how I use oprofile:
 Just remember:
 
 - Make sure you've got the debugging libraries and library symbols
-  installed - under Ubuntu thats 'libc6-dbg'.
+  installed - under Ubuntu that's 'libc6-dbg'.
 - Don't try using it under timer interrupt mode, it'll suffer similar
   accuracy issues to gprof and other timer-based profilers.
 
diff --git a/docs/SquidFaq/SystemWeirdnesses.md b/docs/SquidFaq/SystemWeirdnesses.md
index 516b2219..06883f44 100644
--- a/docs/SquidFaq/SystemWeirdnesses.md
+++ b/docs/SquidFaq/SystemWeirdnesses.md
@@ -27,7 +27,7 @@ Voinov).
 
 ## select()
 
-*select(3c)* won't handle more than 1024 file descriptors. The
+*select(3c)* will not handle more than 1024 file descriptors. The
 *configure* script should enable *poll()* by default for Solaris.
 *poll()* allows you to use many more filedescriptors, probably 8192 or
 more.
 
@@ -68,7 +68,7 @@ Systems running without nscd may fail on such calls if first 256 files
 are all in use.
 
 Since solaris 2.6 Sun has changed the way some system calls work and is
-using *nscd* daemon as a implementor of them. To communicate to *nscd*
+using *nscd* daemon as an implementer of them. To communicate to *nscd*
 Solaris is using undocumented calls.
Basically *nscd* is used to reduce memory usage of user-space system libraries that use passwd and group files. Before 2.6 Solaris cached full passwd file in library @@ -147,7 +147,7 @@ NOTICE: realloccg /proxy/cache: file system full NOTICE: alloc: /proxy/cache: file system full ``` -In a nutshell, the UFS filesystem used by Solaris can't cope with the +In a nutshell, the UFS filesystem used by Solaris cannot cope with the workload squid presents to it very well. The filesystem will end up becoming highly fragmented, until it reaches a point where there are insufficient free blocks left to create files with, and only fragments @@ -336,9 +336,9 @@ to the file */etc/sysctl.conf*: # Linux -## Can't connect to some sites through Squid +## Cannot connect to some sites through Squid -When using Squid, some sites may give erorrs such as "(111) Connection +When using Squid, some sites may give errors such as "(111) Connection refused" or "(110) Connection timed out" although these sites work fine without going through Squid. diff --git a/docs/SquidFaq/TermIndex.md b/docs/SquidFaq/TermIndex.md index 087e350d..1138002e 100644 --- a/docs/SquidFaq/TermIndex.md +++ b/docs/SquidFaq/TermIndex.md @@ -10,7 +10,7 @@ cache is one that you have defined with the *cache_peer* configuration option. Neighbor refers to either a parent or a sibling. In Harvest 1.4, neighbor referred to what Squid calls a sibling. That -is, Harvest had *parents* and *neighbors*. For backward compatability, +is, Harvest had *parents* and *neighbors*. For backward compatibility, the term neighbor is still accepted in some Squid configuration options. ## Regular Expression diff --git a/docs/SquidFaq/ToomanyMisses.md b/docs/SquidFaq/ToomanyMisses.md index 10a46232..e492faa2 100644 --- a/docs/SquidFaq/ToomanyMisses.md +++ b/docs/SquidFaq/ToomanyMisses.md @@ -13,7 +13,7 @@ with the actual cache contents. Here's a script I use to make **sure** this doesn't happen. It's way too paranoid, doing a lot of unnecessary things including throwing away -what's in the cache every time. But it always works. +what is in the cache every time. But it always works. ## sample script diff --git a/docs/SquidFaq/TroubleShooting.md b/docs/SquidFaq/TroubleShooting.md index 7a8446e0..c9d0a142 100644 --- a/docs/SquidFaq/TroubleShooting.md +++ b/docs/SquidFaq/TroubleShooting.md @@ -403,7 +403,7 @@ show you which processes own every open file descriptor on your system. This means that the client socket was closed by the client before Squid was finished sending data to it. Squid detects this by trying to read(2)*some data from the socket. If the*read(2)*call fails, then Squid -konws the socket has been closed. Normally the*read(2)*call +knows the socket has been closed. Normally the*read(2)*call returns*ECONNRESET: Connection reset by peer*and these are NOT logged. Any other error messages (such as*EPIPE: Broken pipe*are logged to*cache.log*. See the "intro" of section 2 of your Unix manual for a @@ -556,7 +556,7 @@ until you learn some more about Unix. As a reference, I suggest ## pingerOpen: icmp_sock: (13) Permission denied -This means your pinger helper program does not have root priveleges. +This means your pinger helper program does not have root privileges. You should either do this when building Squid: make install pinger @@ -592,7 +592,7 @@ incoming request, it knows there is a forwarding loop somewhere. forwarding loops are correctly detected. When Squid detects a forwarding loop, it is logged to the *cache.log* -file with the recieved *Via* header. 
From this header you can determine
+file with the received *Via* header. From this header you can determine
which cache (the last in the list) forwarded the request to you.

> :bulb:
@@ -625,7 +625,7 @@ this:

    CONNECT www.buy.com:443 HTTP/1.0

Then Squid opens a TCP connection to the destination host and port, and
-the *real* request is sent encrypted over this connection. Thats the
+the *real* request is sent encrypted over this connection. That's the
whole point of SSL, that all of the information must be sent encrypted.

With this client bug, however, Squid receives a request like this:
@@ -642,7 +642,7 @@ message.
browser is sending sensitive information unencrypted over the
network.

-##Squid can't access URLs like http://3626046468/ab2/cybercards/moreinfo.html
+## Squid cannot access URLs like http://3626046468/ab2/cybercards/moreinfo.html

by Dave J Woolley (DJW at bts dot co dot uk)

@@ -685,7 +685,7 @@ the hostname part of a URL:
allowed in URI's and URL's. Unfortunately, a number of web services
generate URL's with whitespace.

-Of course your favorite browser silently accomodates these bad URL's.
+Of course your favorite browser silently accommodates these bad URL's.
The servers (or people) that generate these URL's are in violation of
Internet standards. The whitespace characters should be encoded.

@@ -712,7 +712,7 @@ Others are technically violations and should not be performed.
The broken web service should be fixed instead. It is breaking much
more of the Internet than just your proxy.

-## commBind: Cannot bind socket FD 5 to 127.0.0.1:0: (49) Can't assign requested address
+## commBind: Cannot bind socket FD 5 to 127.0.0.1:0: (49) Cannot assign requested address

This likely means that your system does not have a loopback network
device, or that device is not properly configured. All Unix systems
@@ -782,13 +782,13 @@ This error message usually means that the *squid.pid* file is missing.
Since the PID file is normally present when squid is running, the
absence of the PID file usually means Squid is not running.
If you accidentally delete the PID file, Squid will continue running,
-and you won't be able to send it any signals.
+and you will not be able to send it any signals.

> :information_source:
If you accidentally removed the PID file, there are two ways to get it back.

-First locate the proces ID by running *ps* and find Squid. You'll
+First locate the process ID by running *ps* and find Squid. You'll
probably see two processes, like this:

    % ps ax | grep squid
@@ -808,7 +808,7 @@ process id number there. For example:

> :warning:
    Be careful of file permissions. It's no use having a .pid file if
-    squid can't update it when things change.
+    squid cannot update it when things change.

The second is to use the above technique to find the Squid process id.
Then to send the process a HUP signal, which is the same as *squid -k
@@ -821,7 +821,7 @@ The reconfigure process creates a new PID file automatically.

## FATAL: getgrnam failed to find groupid for effective group 'nogroup'

You are probably starting Squid as root. Squid is trying to find a
-group-id that doesn't have any special priveleges that it will run as.
+group-id that doesn't have any special privileges that it will run as.
The default is **nogroup**, but this may not be defined on your system.

The best fix for this is to assign squid a low-privilege user-id and
@@ -853,7 +853,7 @@ The bad configuration of IE is the use of a active configuration script
will only use the proxy.pac.
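If you do not already have a *proxy.pac* to point the browser at, a
minimal one can look like the sketch below; *proxy.example.com:3128* is
a hypothetical placeholder for your real Squid host and port, not a
value taken from any particular setup:

```bash
# Write a minimal proxy.pac; proxy.example.com:3128 is a placeholder,
# so substitute your own Squid address before deploying it.
cat > proxy.pac <<'EOF'
function FindProxyForURL(url, host) {
    // Send everything to the proxy, falling back to a direct
    // connection if the proxy is unreachable.
    return "PROXY proxy.example.com:3128; DIRECT";
}
EOF
```

The `DIRECT` fallback is what lets browsers keep working when the proxy
itself is down.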
Cydoor aps will use both and will generate the errors.

-Disabling the old proxy settings in IE is not enought, you should delete
+Disabling the old proxy settings in IE is not enough; you should delete
them completely and only use the proxy.pac for example.

## Requests for international domain names do not work
@@ -905,7 +905,7 @@ You may be able to use *tcpdump* to track down and observe the problem.
user reports that his Zero Sized Reply problem went away when he told
Internet Explorer to not accept third-party cookies.

-Here are some things you can try to reduce the occurance of the Zero
+Here are some things you can try to reduce the occurrence of the Zero
Sized Reply error:

- Delete or rename your cookie file and configure your browser to
diff --git a/docs/SquidFaq/WindowsUpdate.md b/docs/SquidFaq/WindowsUpdate.md
index 37dc15bf..65124bde 100644
--- a/docs/SquidFaq/WindowsUpdate.md
+++ b/docs/SquidFaq/WindowsUpdate.md
@@ -19,7 +19,7 @@ requests. Particularly when large objects are involved.
 to be cached. It will however, cache nicely provided the size limit
 is set high enough.
- **[range_offset_limit](http://www.squid-cache.org/Doc/config/range_offset_limit)**.
-  Does the main work of converting range requests into cacheable
+  Does the main work of converting range requests into cachable
   requests. Use the same size limit as
   [maximum_object_size](http://www.squid-cache.org/Doc/config/maximum_object_size)
   to prevent conversion of requests for objects which will not cache
@@ -107,7 +107,7 @@ stored in the squid cache.

I also recommend a 30 to 60GB
[cache_dir](http://www.squid-cache.org/Doc/config/cache_dir) size
allocation, which will let you download tonnes of windows updates and
-other stuff and then you won't really have any major issues with cache
+other stuff and then you will not really have any major issues with cache
storage or cache allocation or any other issues to do with the cache.

# Why does it go so slowly through Squid?
diff --git a/docs/Technology/WPAD.md b/docs/Technology/WPAD.md
index 002df240..dff3fc8f 100644
--- a/docs/Technology/WPAD.md
+++ b/docs/Technology/WPAD.md
@@ -36,7 +36,7 @@ Windows Active Directory Group Policy.

- [Fully Automatically Configuring Browsers for WPAD with
  DHCP](/SquidFaq/ConfiguringBrowsers) faq article
- [WPAD DNS](/Technology/WPAD/DNS)
-  covers how User Agents can detect the existance of the proxy
+  covers how User Agents can detect the existence of the proxy
  autoconfiguration file via DNS "Well Known Aliases"

## Other Articles and Information on WPAD
diff --git a/docs/Technology/WPAD/DNS.md b/docs/Technology/WPAD/DNS.md
index 753f7526..35336922 100644
--- a/docs/Technology/WPAD/DNS.md
+++ b/docs/Technology/WPAD/DNS.md
@@ -2,7 +2,7 @@

## Overview

-WPAD can use DNS to probe for the existance of a WPAD web server to
+WPAD can use DNS to probe for the existence of a WPAD web server to
fetch the proxy configuration file from. The WPAD specification
enumerates a number of possibilities; the only required DNS method is
the "Well known alias" method.
diff --git a/docs/ToDo.md b/docs/ToDo.md
index 18e847b0..c8313d7d 100644
--- a/docs/ToDo.md
+++ b/docs/ToDo.md
@@ -44,7 +44,7 @@ This TODO list is no longer accurate. For more updated Squid plans see:

- [ ] refactoring of acl driven types to reduce amount of duplicated
  code (acl_check, acl_tos, acl_address, acl_size_t, ...)
- [ ] ETag caching (???)
-- [ ] Generalize socket binding to allow for multipe ICP/HTCP/SNMP sockets
+- [ ] Generalize socket binding to allow for multiple ICP/HTCP/SNMP sockets
  (get rid of udp_incoming_address) (???)
- [ ] Rework the store API like planned
- [ ] Improved event driven comm code
diff --git a/docs/TranslationGuidelines.md b/docs/TranslationGuidelines.md
index bb0e8c37..8cf2a231 100644
--- a/docs/TranslationGuidelines.md
+++ b/docs/TranslationGuidelines.md
@@ -106,7 +106,7 @@ maintainer ([AmosJeffries](/AmosJeffries) at present).
**.PO** files need to have ISO-639 code information to indicate the
language, and if possible the country ISO-3166 variant code as well.
```
- * Alhpabet used if there are a range of alphabets used for the
+ * Alphabet used if there is a range of alphabets used for the
   language (ie Latin and Cyrillic)
 * If you don't know these codes, an indication of that info may be
   just as useful (ie american english, or british english, not