Install and Configure Squid Proxy Server, ClamAV, SquidClamav, C-ICAP Server – Debian 9 (minimal – standard system utilities, ssh server)

1. Install some needed dependencies.
$sudo apt-get install gcc make curl libcurl4-gnutls-dev rsync

2. Install and Configure Squid Proxy Server.
$sudo apt-get install squid3 calamaris

3. Edit the config file /etc/squid/squid.conf.

Back up the original first:
$cd /etc/squid
$sudo cp squid.conf squid.conf.ORIG
$sudo cp squid.conf squid.conf.bak

To simplify the configuration file (squid.conf), strip out all comments and blank lines:
$sudo sh -c "egrep -v -e '^[[:blank:]]*#|^\$' squid.conf.bak > squid.conf"

$sudo nano /etc/squid/squid.conf

3.1 Change squid.conf options

Make sure this line is present and not commented out (no leading #).
acl CONNECT method CONNECT

Create a new access list, acl LAN, for your internal network 192.168.0.0/24 (or your other internal networks):
acl LAN src 192.168.0.0/24
acl LAN src xxx.xxx.x.x/24

Add additional access lists (blacklist, whitelist, malware_block_list) to block spam, advertising, malware and viruses:

acl malware_block_list url_regex -i "/etc/squid/malware_block_list"
acl blacklist dstdom_regex "/etc/squid/blacklist"
acl whitelist dstdom_regex "/etc/squid/whitelist"
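
For reference, the blacklist and whitelist files are plain text with one regular expression per line; dstdom_regex matches each pattern against the destination domain of the request. The real lists are downloaded in step 3.2; the entries below are purely hypothetical examples of the format:

\.doubleclick\.net$
^ads\.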

Grant or deny access for the new ACLs – order matters:

http_access allow whitelist
http_access deny blacklist
http_access deny malware_block_list
http_access allow LAN

Optional (I did not use this): inform users about blocked websites. Blocked ads will be displayed as an empty transparent placeholder; this requires an HTTP server.
deny_info http://YourServerName/error/dot-transparent.png blacklist
deny_info http://YourServerName/error/dot-transparent.png whitelist
deny_info http://YourServerName/error/error.html malware_block_list

Set the IP address and port Squid listens on:
http_port 192.168.0.1:3128

Additional setup – anonymizer. Filter request headers:
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access Referer deny all
request_header_access X-Forwarded-For deny all
request_header_access Via deny all
request_header_access All deny all
request_header_access Cache-Control deny all
httpd_suppress_version_string on

Cache options: 512 MB of memory cache plus a 400 MB disk cache.

## Cache options

cache_mem 512 MB
cache_dir ufs /var/spool/squid 400 16 256
  ### cache_dir ufs /usr/local/squid/cache 51200 64 256

Disable caching for the LAN access list:
cache deny LAN

Hostname
visible_hostname YourServerName

Hide client IP addresses (do not append X-Forwarded-For):
forwarded_for off

##———————- My test config ——————————————
acl LAN src 192.168.0.0/24
acl LAN src 192.168.100.0/24
acl LAN src 192.168.122.0/24

acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443        # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210        # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280        # http-mgmt
acl Safe_ports port 488        # gss-http
acl Safe_ports port 591        # filemaker
acl Safe_ports port 777        # multiling http
acl CONNECT method CONNECT

acl malware_block_list url_regex -i "/etc/squid/malware_block_list"
acl blacklist dstdom_regex "/etc/squid/blacklist"
acl whitelist dstdom_regex "/etc/squid/whitelist"
http_access allow whitelist
http_access deny blacklist
http_access deny malware_block_list

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow LAN
http_access allow localhost
http_access deny all

visible_hostname proxy
http_port 3128
coredump_dir /var/spool/squid

refresh_pattern ^ftp:        1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .        0    20%    4320

request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access Referer deny all
request_header_access X-Forwarded-For deny all
request_header_access Via deny all
request_header_access All deny all
request_header_access Cache-Control deny all
httpd_suppress_version_string on

 

## Cache options

cache_mem 512 MB
cache_dir ufs /var/spool/squid 400 16 256
cache deny LAN
visible_hostname proxy
forwarded_for off

##———————————————–——–——–——–

3.2 Download the blacklist and whitelist files, unpack them and move them to /etc/squid/.
$wget -c https://ffteixeira.net/blog/sites/default/files/blacklist.tar_.bz2 || wget -c http://terminal28.com/wp-content/uploads/2013/10/blacklist.tar.bz2
(if you used the first mirror, rename blacklist.tar_.bz2 to blacklist.tar.bz2 before unpacking)
$sudo tar -xvf blacklist.tar.bz2
$sudo mv blacklist whitelist /etc/squid

Before trying to start Squid, you should verify that your squid.conf file makes sense. This is easy to do. Just run the following command:
$sudo squid -k parse

Ignore these errors for now; they are resolved in the next step:
…/08/11 12:23:16| Processing: acl malware_block_list url_regex -i "/etc/squid/malware_block_list"
…/08/11 12:23:16| ERROR: Can not open file /etc/squid/malware_block_list for reading
…/08/11 12:23:16| Warning: empty ACL: acl malware_block_list url_regex -i "/etc/squid/malware_block_list"
…/08/11 12:23:16| Processing: acl blacklist dstdom_regex "/etc/squid/blacklist"
…/08/11 12:23:17| /etc/squid/squid.conf line 20: acl blacklist dstdom_regex "/etc/squid/blacklist"
…/08/11 12:23:17| WARNING: there are more than 100 regular expressions. Consider using less REs or use rules without expressions like 'dstdomain'.

Restart Squid.
$sudo /etc/init.d/squid restart

3.3 Download the malware_block_list script (it updates the blocked domains and IP addresses), unpack it and install it to /usr/local/bin.
$wget -c https://ffteixeira.net/blog/sites/default/files/malware_block_list.tar_.bz2 || wget -c http://terminal28.com/wp-content/uploads/2013/10/malware_block_list.tar.bz2
(if you used the first mirror, rename malware_block_list.tar_.bz2 to malware_block_list.tar.bz2 before unpacking)
$sudo tar -xvf malware_block_list.tar.bz2
$sudo mv malware_block_list /usr/local/bin/
$sudo chmod +x /usr/local/bin/malware_block_list
$sudo touch /var/log/malware_block_list.log

Add script malware_block_list to Cron.
$sudo crontab -e

add
    
@daily /usr/local/bin/malware_block_list

Logfile location: /var/log/malware_block_list.log. Go to MalwarePatrol.net and open the Block List tab. You should see free and paid subscription lists; choose Free/Subscribe and subscribe to the list. You will receive a password/receipt number by e-mail. Log in at https://www.malwarepatrol.net/login.php, find Squid Web Proxy ACL and click Download. You will be redirected to a page containing the malware list. Every subscription has a unique receipt number, e.g. receipt=f1234567890, giving a URL like https://lists.malwarepatrol.net/cgi/getfile?receipt=f1234567890&product=8&list=squid. Copy that URL into the script as the value of link, and edit link, user and pass.

$sudo nano /usr/local/bin/malware_block_list

link='PASTE_LINK_FROM_MALWAREPATROL.NET'
user='--http-user=USERNAME'
passwd='--http-passwd=PASSWORD'

Note: in the downloaded script, change any squid3 paths and commands to squid (Debian 9 uses the squid package).

##————————————- My test config. —————————
#!/bin/sh
### ###
###
### Squid3 Installation and Configuration.
###
### Polish version
###
### http://man.sethuper.com/instalacja-squid-proxy-serwer-clamav-squidclamav-c-icap-serwer-debian-6-0-x
###
#=======================================================================================================================
###
### English version
###
### http://terminal28.com/how-to-install-and-configure-squid-proxy-server-clamav-squidclamav-c-icap-server-debian-linux/
###
### ###

# If you don't want to log wget debug output, remove "$debug" from the "fetchcmd" line

## Settings
# Malware patrol URL with unique ID
# Change ID after receipt in link (..getfile?receipt=f138125701..)
link='https://lists.malwarepatrol.net/cgi/getfile?receipt=f1502379316&product=8&list=squid'

# HTTP USER
user='--http-user=<user>'

# HTTP PASSWORD
pass='--http-passwd=<passwd>'

# Do not verify the TLS certificate
cert='--no-check-certificate'

# File location for Squid
target='/etc/squid/malware_block_list'

# Reload Squid
reloadcmd='/usr/sbin/squid -k reconfigure'

# Temporary file
tmp="/tmp/.malware_block_list.$$"

# Wget debug log (optional)
#debug="-nva /var/log/squid/malware_block_list.log"

# Command to download the malware list
# Debug logging removed because it caused an error
#fetchcmd="wget -q --no-check-certificate $link -O $tmp $user $pass $debug"
fetchcmd="wget -q --no-check-certificate $link -O $tmp $user $pass"

# ——-

# Log file
logs='/var/log/squid/malware_block_list.log'

## execution
##
echo "$(date -R) Downloading new malware_block_list" >> "$logs"

# Downloading new malware_block_list from Malware Patrol
$fetchcmd

# Check that the temporary file exists and is not empty before overwriting the old malware list
if [ ! -s "$tmp" ]
then
    echo "$(date -R) The temporary file '$tmp' does not exist or is empty; aborting" >> "$logs"
    exit 1
fi

# copy the malware_block_list to /etc/squid/
cp "$tmp" "$target"

# remove the temporary file
rm "$tmp"

# reload Squid
$reloadcmd
##———————————————————————————

 
$sudo sh /usr/local/bin/malware_block_list
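
A quick sanity check that the script actually produced a non-empty list (a small sketch):

$wc -l /etc/squid/malware_block_list
$head -n 3 /etc/squid/malware_block_list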

4. Install the ClamAV daemon and the C-ICAP server.
$sudo apt-get install clamav-daemon
$sudo mkdir install
$cd install
$sudo wget https://sourceforge.net/projects/c-icap/files/c-icap/0.5.x/c_icap-0.5.2.tar.gz/download -O c_icap-0.5.2.tar.gz
$sudo tar -xvf c_icap-0.5.2.tar.gz
$cd c_icap-0.5.2
$sudo ./configure
$sudo make
$sudo make install
$cd ..

Edit the config file /usr/local/etc/c-icap.conf.
$sudo nano /usr/local/etc/c-icap.conf

Change:

Line 223: ServerAdmin root@localhost
Line 232: ServerName YourServerName

Add at line 708:

Service squidclamav squidclamav.so

4.1 C-ICAP server autostart script.
$wget -c https://ffteixeira.net/blog/sites/default/files/c-icap-autostart.tar_.gz || wget -c http://terminal28.com/wp-content/uploads/2013/10/c-icap-autostart.tar.gz
(if you used the first mirror, rename c-icap-autostart.tar_.gz to c-icap-autostart.tar.gz before unpacking)
$sudo tar xvf c-icap-autostart.tar.gz
$sudo rsync -avh init.d default /etc
$sudo update-rc.d c-icap defaults

4.2 Create logrotate script for c-icap server.
$sudo tee /etc/logrotate.d/c-icap << EOT

/usr/local/var/log/server.log /usr/local/var/log/access.log {
     daily
     rotate 4
     missingok
     notifempty
     compress
     create 0644 root root
     postrotate
     /etc/init.d/c-icap force-reload > /dev/null
     endscript
}
EOT
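
You can dry-run the new logrotate configuration to make sure it parses cleanly; -d makes logrotate print what it would do without actually rotating anything:

$sudo logrotate -d /etc/logrotate.d/c-icap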

4.3 Change permissions for the c-icap logrotate script and the server logs.
$sudo chmod 644 /etc/logrotate.d/c-icap
$sudo chown root:root /etc/logrotate.d/c-icap
$sudo chmod 644 /usr/local/var/log/ -R
$sudo chown root:root /usr/local/var/log/ -R
$sudo ln -s /usr/local/var/log/server.log /var/log/server.log
$sudo ln -s /usr/local/var/log/access.log /var/log/access.log

5. Install Squidclamav
$cd install
$wget -c https://sourceforge.net/projects/squidclamav/files/squidclamav/6.15/squidclamav-6.15.tar.gz/download -O squidclamav-6.15.tar.gz
$sudo tar zxvf squidclamav-6.15.tar.gz
$cd squidclamav-6.15
$sudo ./configure
$sudo make
$sudo make install
$sudo cp -rf cgi-bin /usr/lib/
$sudo chmod +x /usr/lib/cgi-bin/clwarn* -R
$sudo chown www-data:www-data /usr/lib/cgi-bin/clwarn* -R
$cd ..
$sudo ldconfig

5.1 Configure squidclamav.
$sudo nano /usr/local/etc/squidclamav.conf

Add the redirect URL – the default script is clwarn.cgi (English). You can choose a different language version: DE, FR, BR, RU.

Line 18: redirect http://YourServerName/cgi-bin/clwarn.cgi

Make sure this directive is present in the config file.

Line 27: clamd_local /var/run/clamav/clamd.ctl

6. Check the ClamAV config file; make sure this directive is present.
$sudo nano /etc/clamav/clamd.conf

Line 4: LocalSocket /var/run/clamav/clamd.ctl

Configure Freshclam.
$sudo nano /etc/clamav/freshclam.conf

Line 22: SafeBrowsing true

6.1 Register on Securiteinfo.com: https://www.securiteinfo.com/clients/customers/signup
Subscribe to the basic list for ClamAV. You will get auto-generated URLs for the ClamAV databases under the Setup tab.
Downloads are allowed from 1 IP address and limited to 24 downloads per day.
Add the generated URLs at the end of the freshclam.conf file:

DatabaseCustomURL http://www.securiteinfo.com/get/signatures/3b4d0…5764/securiteinfo.hdb
DatabaseCustomURL http://www.securiteinfo.com/get/signatures/3b4b…eafd/securiteinfo.ign2
DatabaseCustomURL http://www.securiteinfo.com/get/signatures/3b4d0d…61eafd/javascript.ndb
DatabaseCustomURL http://www.securiteinfo.com/get/signatures/34d…81f/spam_marketing.ndb
DatabaseCustomURL http://www.securiteinfo.com/get/signatures/3b…61eafd/securiteinfohtml.hdb
DatabaseCustomURL http://www.securiteinfo.com/get/signatures/3b…365afd/securiteinfoascii.hdb
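
To pull the new signature databases immediately instead of waiting for the next scheduled update, you can run freshclam manually (a sketch; the freshclam daemon is stopped first so it does not hold the log lock):

$sudo service clamav-freshclam stop
$sudo freshclam
$sudo service clamav-freshclam start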

Restart ClamAV.
$sudo /etc/init.d/clamav-daemon restart

7. Configure Squid with C-ICAP. Configuration for Squid version – 3.1.20.
$sudo nano /etc/squid/squid.conf

Add at the end of the file

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Authenticated-User
icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all

Configuration for Squid version – 3.1.6.
$sudo nano /etc/squid/squid.conf

Add at the end of the file

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Client-Username
icap_preview_enable on
icap_preview_size 1024
adaptation_service_set service_req
icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/request
adaptation_access service_req allow all

adaptation_service_set service_resp
icap_service service_resp respmod_precache bypass=0 icap://127.0.0.1:1344/response
adaptation_access service_resp allow all

Run C-ICAP server.
$sudo /usr/local/bin/c-icap &
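
A quick way to confirm the ICAP service is up and listening on its default port 1344 (the port used in the squid.conf lines above), using ss from iproute2:

$sudo ss -tlnp | grep 1344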

8. Restart Squid.
$sudo chown -R proxy:proxy /var/spool/squid
$sudo squid -z
$sudo service squid restart

9. Configure firewall – masquerade, prerouting.
Enable forwarding. Edit the config file sysctl.conf:
$sudo nano /etc/sysctl.conf

Uncomment the IPv4 and IPv6 forwarding lines and change them to 1:

Line 28: net.ipv4.ip_forward = 1
Line 33: net.ipv6.conf.all.forwarding = 1
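
To apply the forwarding settings without rebooting (a quick sketch):

$sudo sysctl -p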

##——————- Not used ———————
9.1. Configure firewall – iptables.

$sudo nano /etc/iptables.up.rules

Add the rules (change the IP address and network interface to match your setup):

*nat

-A PREROUTING -p tcp -m tcp -i eth1 --dport 80 -j REDIRECT --to-ports 3128
-A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
COMMIT
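
If you do use these rules, they can be loaded and verified with iptables-restore (a sketch, assuming the rules were saved to the file above):

$sudo iptables-restore < /etc/iptables.up.rules
$sudo iptables -t nat -L -n -v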

 ##—————————————————–

10. Test.

If you have set everything up correctly, go to http://www.eicar.org/85-0-Download.html and try to download the test file:

eicar.com
68 Bytes

Result:
You should be redirected to:

    http://YourServerName/cgi-bin/clwarn.cgi or http://YourServerName/error.html.
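
If the download is not blocked, the Squid access log and the c-icap server log (symlinked to /var/log/server.log in step 4.3) usually show why; a debugging sketch:

$sudo tail -f /var/log/squid/access.log /var/log/server.log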

11. SARG and SquidGuard

Credits

SARG (Squid Analysis Report Generator) Installation & Configuration

It is an open-source tool which analyzes Squid proxy logs and generates HTML reports, presenting the information from the logs in a clear, easy to understand format.
It reports users' IP addresses, total and per-user bandwidth usage and more, with daily, weekly and monthly reports.

Installation
On Debian, sarg is available as a package, so installation only requires installing it together with a web server to serve the reports:

$ sudo apt-get install -y gcc lighttpd sarg

Now that the installation is complete, we will configure it to our needs by editing the configuration file:

$ sudo nano /etc/sarg/sarg.conf

First, uncomment the line starting with access_log and set the path to the Squid access log. Next, set the output directory for reports in the line starting with output_dir, and select your preferred date format in the date_format line.

#—————————————————————-sarg.conf————————————————————-
# TAG: access_log file
# Where is the access.log file
#
#
access_log /var/log/squid/access.log
Add output directory
# TAG: output_dir
# The reports will be saved in that directory
#
#
output_dir /var/www/html/squid-reports
Set the correct date format
# TAG: date_format
# Date format in reports: e (European=dd/mm/yy), u (American=mm/dd/yy), w (Weekly=yy.ww)
#
date_format e
#——————————————————————————————————————————————–

Lastly, set overwrite_report to yes:

#————————————————————–sarg.conf—————————————————————

# # TAG: overwrite_report yes|no
# yes – if report date already exist then will be overwritten.
# no – if report date already exist then will be renamed to filename.n, filename.n+1
#
overwrite_report yes
##——————————————————————————————————————————————

$ sudo mkdir /var/www/html/squid-reports

Generating a report
To create the Squid analysis report, run the following command:

$ sudo sarg -x

Note: it may take a while, depending on the number of users accessing the Squid proxy.

Accessing the report

To access the report, open the following URL in a web browser:
http://<server-IP-address>/squid-reports

Now we have all the analyzed Squid logs in a nicely sorted, easy to understand format.

Note: you can also create a cron job to generate a report automatically at a time of your choosing.

$ sudo crontab -e

Add for example this line at the end

0 */4 * * * /usr/bin/sarg -x

This will generate a report every 4th hour.
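
To confirm that the cron job is actually producing reports, a quick check of the output directory:

$ ls -lt /var/www/html/squid-reports | head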

 

Credits

How to fix “System program problem detected” error on Ubuntu

The error "System program problem detected" comes up when a certain application crashes. Ubuntu has a program called Apport that is responsible for detecting such crashes and upon user consent, report these crashes to developers. This process intends to get the problem fixed by the developers.

However, it can be very annoying for ordinary users, and there is little point in showing errors that users cannot do anything about themselves, so you might want to disable these reports.

"system program problem detected "

 

 

Remove crash report files

The apport system creates crash report files in the /var/crash directory. These crash report files cause the error message to appear every time Ubuntu boots.

$ cd /var/crash
$ ls
_opt_google_chrome_chrome.1000.crash
_usr_lib_chromium-browser_chromium-browser.1000.crash
_usr_sbin_ulatencyd.0.crash
_usr_share_apport_apport-gtk.1000.crash

Just remove the crash report files

$ sudo rm /var/crash/*

After removing all the crash report files, the error message should stop popping up. However, if a new crash takes place, it will appear again in the future.

Turn off apport

After removing the old crash reports, if you still get the same error message, you can completely turn off Apport to get rid of it. Edit the configuration file /etc/default/apport.

$ gksudo gedit /etc/default/apport

The file would contain something like this

# set this to 0 to disable apport, or to 1 to enable it
# you can temporarily override this with
# sudo service apport start force_start=1
enabled=1

Just set the value of enabled to 0, and this will disable apport.

enabled=0

Save the file and close it. From the next boot onwards, there should be no error messages ever. If you do not want to restart the system then restart apport from the command line.

$ sudo restart apport
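
If you prefer to make the change non-interactively, a one-line alternative (a sketch using sed; it flips the same enabled= setting shown above):

$ sudo sed -i 's/^enabled=1/enabled=0/' /etc/default/apport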

 

Credits

How to remove mate-desktop from Debian

Uninstall mate-desktop

To remove just the mate-desktop package itself from Debian, execute in a terminal:

sudo apt-get remove mate-desktop

 

Uninstall mate-desktop and its dependent packages

To remove the mate-desktop package and any other dependent packages which are no longer needed from Debian Jessie:

sudo apt-get remove --auto-remove mate-desktop

 

Purging mate-desktop

If you also want to delete configuration and/or data files of mate-desktop from Debian Jessie then this will work:

sudo apt-get purge mate-desktop

 

To delete configuration and/or data files of mate-desktop and its dependencies from Debian Jessie, execute:

sudo apt-get purge --auto-remove mate-desktop

Install SquidGuard on Debian 9

Install SquidGuard:
1. sudo apt-get install squidguard
2. sudo mkdir /opt/3rdparty
3. cd /opt/3rdparty
We are going to use the list from shallalist.de for "testing", since it's 100% free for non-commercial use. For a bigger and much more thorough blacklist, I use http://urlblacklist.com/. It's free to try once, and has different pricing tiers for personal/school/business use.
4. sudo wget http://www.shallalist.de/Downloads/shallalist.tar.gz
5. sudo tar xzf shallalist.tar.gz
6. sudo cp -a /opt/3rdparty/BL/porn /var/lib/squidguard/db
   sudo cp -a /opt/3rdparty/BL/adv /var/lib/squidguard/db
   sudo cp -a /opt/3rdparty/BL/spyware /var/lib/squidguard/db

7. Add this to /etc/squid/squid.conf (type "sudo nano /etc/squid/squid.conf"):
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
8. sudo squidGuard -C all
9. sudo chown -R proxy:proxy /var/lib/squidguard/db

Edit the squidGuard.conf

Back up your squidGuard.conf, then create a new one:
1. sudo cp /etc/squidguard/squidGuard.conf /etc/squidGuard.conf.bak
2. sudo rm /etc/squidguard/squidGuard.conf
3. sudo nano /etc/squidguard/squidGuard.conf
Copy and paste this:

#—————————–squidGuard.conf—————————————————————
#
# CONFIG FILE FOR SQUIDGUARD
#
dbhome /var/lib/squidguard/db
logdir /usr/local/squidGuard/logs

dest porn {
    domainlist porn/domains
    urllist porn/urls
}
dest adv {
    domainlist adv/domains
    urllist adv/urls
}
dest spyware {
    domainlist spyware/domains
    urllist spyware/urls
}
acl {
    default {
        pass !porn !adv !spyware all
        redirect http://localhost/block.html
    }
}

#————————————————————————————————————

You can test squidGuard by doing a dry run:
echo "http://www.pornhub.com 10.50.55.10/- - GET" | sudo squidGuard -c /etc/squidguard/squidGuard.conf -d

You should see,
squidGuard ready for requests
squidGuard stopped
If there are errors, it will tell you. The most likely errors you'll run into are permission issues. If it complains about permissions on your database, make sure the files are owned by the user and group named "proxy". You can check that with "sudo ls -l /var/lib/squidguard/db*".

You can now use the Firefox browser you set up to use your proxy server to make sure you are blocking porn and ads. For better protection, I recommend using the blacklist from http://urlblacklist.com/.

Finish with:
cd /var/lib/squidguard/db/; squidGuard -C all ; chown proxy:proxy -R /var/lib/squidguard/db/ ; squid -k reconfigure;  service squid restart
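
You can also verify the blocking end to end through the proxy itself; a hedged sketch using curl from the proxy host (assumes Squid listens on port 3128 and allows localhost, as in the test config earlier; the domain is the one used in the dry run above). A blocked request should come back with the block page (or a Squid error if no web server is running on localhost) rather than the real site:

curl -x http://127.0.0.1:3128 -I http://www.pornhub.com/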

Credits

Explicit vs. Transparent Proxy

A proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems. Today, most proxies are web proxies, facilitating access to content on the World Wide Web and providing anonymity.1

In an explicit proxy configuration, the client (e.g. browser, desktop application etc.) is explicitly configured to use a proxy server, meaning the client knows that all requests will go through a proxy. The client is given the hostname/IP address and port number of the proxy service. When a user makes a request, the client connects to the proxy service and sends the request. The disadvantage to explicit proxy is that each client must be properly configured to use the proxy.

In a transparent proxy configuration, the proxy is typically deployed at the Internet gateway and the proxy service is configured to intercept traffic for a specified port. The client (e.g. browser, desktop application etc.) is unaware that traffic is being processed by a proxy. For example, a transparent HTTP proxy is configured to intercept all traffic on ports 80/443. The typical benefits of a transparent proxy are a standard enterprise configuration, in which all clients routed to the Internet are always filtered and protected no matter what end users do or change on their machines, and a reduction in client-proxy configuration troubleshooting.

Credits

KVM/Virsh

You can create, delete, run, stop, and manage your virtual machines from the command line, using a tool called virsh. Virsh is particularly useful for advanced Linux administrators interested in scripting or automating some aspects of managing their virtual machines.

 

Installing

Install virsh:

sudo apt-get install libvirt-bin

 

Connecting

Connect to your hypervisor. This can be local, or even remote. In most cases, if you want to manage VMs running on the local hypervisor:

$ virsh connect qemu:///system
Connecting to uri: qemu:///system

 

Listing VMs

 

$ virsh list
 Id Name                 State
----------------------------------
  1 foo                  running

 

Creating a Virtual Machine

Virtual Machines managed by virsh are created by describing the virtual machine in a libvirt XML file, and importing that XML file into virsh.

You can export the XML of an existing virtual machine:

$ virsh dumpxml foo > /tmp/foo.xml
Connecting to uri: qemu:///system

And then edit /tmp/foo.xml, which should be rather straightforward. For more information about the libvirt XML format, see the libvirt domain XML documentation.

Once you have an XML file describing the new virtual machine you want to create, import it into virsh, and run it immediately:

$ virsh create /tmp/foo_new.xml 
Connecting to uri: qemu:///system
Domain foo_new created from /tmp/foo_new.xml
$ virsh list
Connecting to uri: qemu:///system
 Id Name                 State
----------------------------------
  3 foo_new              running

Alternatively, if you want to define it, but not run it, you could have used:

$ virsh define /tmp/foo_new.xml
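
If the defined machine should also start automatically when the host boots, virsh has an autostart flag (a small sketch; foo_new is the example domain from above):

$ virsh autostart foo_new
$ virsh autostart --disable foo_new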

 

Working with a Running Virtual Machine

Once a virtual machine is running, you can manage it in many different ways, such as:

$ virsh start foo

 

$ virsh reboot foo

 

$ virsh shutdown foo

 

$ virsh suspend foo

 

$ virsh resume foo

You can also adjust memory, dynamically attach devices and interfaces, modify the networking configuration, and so on. This guide is clearly not comprehensive. For a complete description of virsh commands, see:

$ man virsh

 

Console

Sometimes, it's useful to attach to the console of a running VM, to obtain debugging information, etc.

$ virsh console foo
Connected to domain foo
Escape character is ^]

 

Details

To view the details about a particular virtual machine:

$ virsh dumpxml foo

These can be saved to a file, modified, and imported again using:

$ virsh define foo.xml

 

Deleting a Virtual Machine

To delete a virtual machine, first terminate it (if running), and then undefine it:

$ virsh destroy foo_new
$ virsh undefine foo_new

 

Credits

Linux: list KVM VM guests using the virsh command

Use the following command to list KVM VMs on a Linux based server:
# virsh list

To list inactive & active VM/domains:
# virsh list --all
Here are all the other options:

Option Description
--inactive list inactive domains
--all list inactive & active domains
--transient list transient domains
--persistent list persistent domains
--with-snapshot list domains with existing snapshot
--without-snapshot list domains without a snapshot
--state-running list domains in running state
--state-paused list domains in paused state
--state-shutoff list domains in shutoff state
--state-other list domains in other states
--autostart list domains with autostart enabled
--no-autostart list domains with autostart disabled
--with-managed-save list domains with managed save state
--without-managed-save list domains without managed save
--uuid list UUIDs only
--name list domain names only
--table list table (default)
--managed-save mark inactive domains with managed save state
--title show domain title
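
For example, combining two of the options above to print only the names of running domains:

# virsh list --state-running --name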

Credits