Security risks from insufficient logging and monitoring

Turning a Blind Eye

Article from ADMIN 48/2018
Although inadequate logging and monitoring cannot generally be exploited for attacks, it nevertheless significantly affects the level of security.

Whether or not an application or a server logs something is initially of no interest to an attacker; neither is whether anyone evaluates the logged data. No attack technique compromises a server because of a lack of logging, nor can missing log monitoring be used directly for attacks against users. The only direct attacks seen so far have been attacks through the logfiles themselves: If a cross-site scripting (XSS) vulnerability allows JavaScript malware to be injected into logfiles and the administrator evaluates those logfiles with a tool that executes JavaScript, an attack is possible (e.g., manipulating the web application with the administrator's account or infecting the administrator's computer with malicious code through a drive-by infection).
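Two simple countermeasures close this particular hole: sanitize untrusted data before it is written to a logfile, and escape log lines before a viewer renders them as HTML. The following Python sketch illustrates both steps; the function names and the User-Agent example are purely illustrative and assume an HTML-based log viewer.

import html
import re

def sanitize_for_log(value: str) -> str:
    """Strip control characters (e.g., newlines used for log forging)
    from untrusted input before it is written to a logfile."""
    return re.sub(r"[\x00-\x1f]", " ", value)

def render_log_line_as_html(line: str) -> str:
    """Escape a raw log line before it is shown in an HTML-based
    log viewer, so an injected <script> payload stays inert text."""
    return html.escape(line)

# Example: a malicious User-Agent header ends up in the access log.
attacker_input = '<script>alert("owned")</script>'
log_line = f'GET /login 200 ua="{sanitize_for_log(attacker_input)}"'
print(render_log_line_as_html(log_line))

Stripping control characters additionally prevents log forging, in which an attacker injects newlines to fake additional log entries.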

Knowing Nothing Is a Weakness

Despite the apparent insignificance of logs in system security, "Insufficient Logging & Monitoring" made it into the Open Web Application Security Project (OWASP) 2017 Top 10 [1] in 10th place, whereas the cross-site request forgery (CSRF) attack, which can cause actual damage, only reached 13th place [2]. CSRF attacks got the lower rating because most web applications are now developed with frameworks, most of which include CSRF protection; in fact, CSRF vulnerabilities have only been found in about five percent of applications. Another reason for the ranking is that, although insufficient logging and monitoring cannot be exploited directly for attacks, it contributes significantly to attacks going undetected, which plays into the attackers' hands.

How much does a penetration test show? The pen tester's actions should be logged extensively enough that the attack and its consequences can be traced afterward. If they are not, you will have a problem in an emergency: You will either not detect an attack at all or fail to determine its consequences correctly.

Not detecting attacks is a problem even if the attempts are unsuccessful, because most attacks start with a search for a vulnerability. If these attack attempts are not detected and stopped, they will eventually guide the attacker to the target. In 2016, it took an average of 191 days for a successful attack (data breach) to be detected, and containing it took an average of another 66 days [3]. Both are more than enough time for the attacker to cause massive damage.

Monitoring Protects

Figure 1 shows a sample network, including a DMZ, where both the web server (with the application, database, and media servers it uses) and the mail server reside. Additionally, you will find a local network with a computer for monitoring production and a large number of clients, plus the central file, database, and directory service servers.

Figure 1: The potential targets in a sample network.

Attackers will see numerous possibilities for attacking the corporate network: They could bring the web server under their own control, then compromise other servers in the DMZ, and work their way forward into the local network. If the web server allows file uploads, they could upload an image file with an exploit that compromises an employee's computer when the employee opens the file. They could send an employee an email with malware attached. Alternatively, they could infect the laptop of a sales representative outside the protected network with malware that spreads to a local server once the laptop is connected to the corporate network again. The bad guys could also compromise a website that employees regularly visit (a "watering hole" attack) and load it with a drive-by infection.

If just one of these attacks is not detected and stopped, the attacker has gained a foothold on a first computer in the corporate network and can continue to penetrate from there until finally reaching a computer with sensitive data. At that point, the attacker might also be able to route the data out of the network without being noticed, because if the intrusion itself goes undetected, the company's data loss prevention systems are unlikely to fare any better.

Such targeted attacks are usually not even the biggest problem, because they are comparatively rare. Web servers are often attacked because cybercriminals want to use them to spread malware; the website operator's local network does not typically interest these attackers. The local network, however, is constantly threatened by malware anyway.

Typical Gaps in Monitoring

Insufficient logging, inadequate detection of security incidents, and insufficient monitoring and response can raise their ugly heads in many places on a web server, including:

  • Verifiable events that are not logged (e.g., logins, failed logins, and critical transactions; see the logging sketch after this list).
  • Warnings and errors that generate either no logfile entries or entries that are insufficient or unclear.
  • Application and API logfiles that are not monitored for suspicious activity.
  • Logfiles that are only stored locally, where an attacker can manipulate them, so that central evaluation is impossible (see the forwarding sketch after the "Web Application Protection" box).
  • Inadequate alerting thresholds, missing escalation processes, or existing processes that are ineffective.
  • Penetration tests and scans with dynamic application security testing (DAST) tools such as the OWASP Zed Attack Proxy [4] that do not trigger alerts, which means a real attack would not set off an alarm either.
  • Applications unable to detect, escalate, or warn against active attacks in real time, or at least near real time. (See also the "Web Application Protection" box.)
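To make the first and fifth points more concrete, here is a minimal Python sketch of application-side security logging: every login attempt is recorded, and repeated failures for the same account trigger an alert once a threshold is exceeded. The logger name, the threshold value, and the in-memory counter are assumptions for the example; a real application would persist the counters and feed the records into a central log or SIEM system.

import logging
from collections import defaultdict

# Hypothetical security-event logger; in practice the records would be
# forwarded to a central log host (see the "Web Application Protection" box).
security_log = logging.getLogger("app.security")
logging.basicConfig(
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    level=logging.INFO,
)

FAILED_LOGIN_THRESHOLD = 5          # assumed alerting threshold
failed_logins = defaultdict(int)    # per-account counter (illustrative only)

def record_login_attempt(username: str, source_ip: str, success: bool) -> None:
    """Log every login attempt and raise an alert once the number of
    failures for an account exceeds the configured threshold."""
    if success:
        security_log.info("login ok user=%s ip=%s", username, source_ip)
        failed_logins[username] = 0
        return

    failed_logins[username] += 1
    security_log.warning("login failed user=%s ip=%s", username, source_ip)

    if failed_logins[username] >= FAILED_LOGIN_THRESHOLD:
        # This is where an escalation process would be triggered
        # (ticket, pager, SIEM alert); here it is only an ERROR entry.
        security_log.error(
            "possible brute-force attack user=%s ip=%s failures=%d",
            username, source_ip, failed_logins[username],
        )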

Web Application Protection

Commercial and open source tools for application protection (e.g., the OWASP AppSensor [5], which provides application-based intrusion detection, or the ModSecurity web application firewall [6], which can be configured with the OWASP ModSecurity Core Rule Set [7]) generate sufficient logfiles if correctly configured. You will, of course, also have to evaluate the logs, typically with special log correlation software that often comes with customizable dashboards and various alerting functions. The best known example of this kind of software is probably Nagios [8].
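Getting the logfiles off the compromised host in the first place is the flip side of the same coin (the fourth point in the list above). A minimal sketch, assuming a central syslog collector is reachable as loghost.example.com on UDP port 514, could use Python's standard library to forward security events:

import logging
import logging.handlers

# Ship security events to a central syslog host so an attacker who
# compromises the web server cannot simply delete the traces.
# "loghost.example.com" is a placeholder for your log collector.
syslog = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
syslog.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))

security_log = logging.getLogger("app.security")
security_log.addHandler(syslog)
security_log.setLevel(logging.INFO)

security_log.warning("login failed user=%s ip=%s", "alice", "203.0.113.7")

Once the events arrive on a separate, hardened log host, an attacker who takes over the web server can no longer cover their tracks locally, and the log correlation software has a single place to evaluate.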

Even if none of these gaps exist in an application or installation, one pitfall remains: Logging and alerting events that are visible to the user, and thus to a potential attacker, make the attacker's job easier. The application then has an information leak of the kind ranked third in the OWASP Top 10 as "Sensitive Data Exposure."
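One way to avoid this leak is to keep the detail in the logs and show the user only a generic message. The following sketch assumes a hypothetical handle_auth_failure() helper; the opaque reference ID lets support staff find the corresponding log entry without revealing anything useful to an attacker.

import logging
import uuid

log = logging.getLogger("app.security")

def handle_auth_failure(username: str, reason: str) -> str:
    """Log the full detail server-side, but return only a generic
    message (plus an opaque reference ID) to the client, so the
    alerting itself does not leak information to an attacker."""
    incident_id = uuid.uuid4().hex[:8]   # opaque reference for support staff
    log.warning("auth failure id=%s user=%s reason=%s",
                incident_id, username, reason)
    return f"Login failed (reference {incident_id})."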
