Overview of the System Log Viewer Page
You can use the System Log Viewer window to see a list of all system logs that have been generated. You can apply filters to the list to see only certain log entries. The filters you can use are:
- Severity (critical, error, warning, or information)
- Error Code
- Module
- Time frame

To filter the system log list
- In the Hosted Console website, click Configuration > System Log Viewer.
- Apply any combination of the following filters to the list:
  - In the Severity box, select the check box for each severity whose log entries you want to see.
  - In the Error Code box, type the error code of the logs you want to see.
  - In the Module box, type the module (Monitoring, for example) for which you want to see system logs.
  - Select a time frame using the date and time pickers.
- Click Filter.

You can click a system log entry in the list to view details about it in the Message Details dialog box.

To view the message details
- In the Time column of the log list, click the log entry.
- While viewing the system log message details, you can click Up or Down to scroll through the other system log entries and view their details.
- When you are done viewing the message details, click OK.
Content-Centric Networking (CCN) is a network architecture for transferring named content from producers to consumers upon request. The name-to-content binding is cryptographically enforced with a digital signature generated by the producer. Thus, content integrity and origin authenticity are core features of CCN. In contrast, content confidentiality and privacy are left to the applications. The typically advocated approach for protecting sensitive content is to use encryption, i.e., restrict access to those who have the appropriate decryption key(s). Moreover, content is typically encrypted once for identical requests, meaning that many consumers obtain the same encrypted content. From a privacy perspective, this is a step backwards from the ``secure channel'' approach in today's IP-based Internet, e.g., TLS or IPSec. In this paper, we assess the privacy pitfalls of this approach, particularly when the adversary learns some auxiliary information about the popularity of certain plaintext content. Merely by observing (or learning) the frequency of requested content, the adversary can learn which encrypted content corresponds to which plaintext data. We evaluate this attack using a custom CCN simulator and show that even moderately accurate popularity information suffices for accurate mapping. We also show how the adversary can exploit caches to learn content popularity information. The adversary needs to know the content namespace in order to succeed. Our results show that encryption-based access control is insufficient for privacy in CCN. More extensive counter-measures (such as namespace restrictions and content replication) are needed to mitigate the attack.
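As a minimal sketch of the frequency-matching step described in the abstract, assume the adversary already holds a popularity ranking of the plaintext catalog; the identifiers and counts below are hypothetical, not from the paper's evaluation:

```python
from collections import Counter

# Hypothetical inputs: observed requests for encrypted content identifiers,
# and auxiliary popularity ranking of plaintext items (most popular first).
observed_requests = ["enc_7", "enc_3", "enc_7", "enc_1", "enc_7", "enc_3"]
plaintext_by_popularity = ["movie_trailer", "news_clip", "howto_video"]

# Rank encrypted identifiers by observed request frequency.
encrypted_by_frequency = [cid for cid, _ in Counter(observed_requests).most_common()]

# Map the i-th most requested ciphertext to the i-th most popular plaintext.
mapping = dict(zip(encrypted_by_frequency, plaintext_by_popularity))
print(mapping)  # {'enc_7': 'movie_trailer', 'enc_3': 'news_clip', 'enc_1': 'howto_video'}
```

Even this naive rank-matching illustrates why identical ciphertexts served to many consumers leak information once request frequencies can be observed, for example at caches.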
Adopting IaaS: Tips and Best Practices for Cloud Transformation Many organizations are choosing to adopt cloud and hybrid cloud architectures to integrate with infrastructure-as-a-service (IaaS) solutions. It’s easy to see why, given the many benefits and advantages. These include: - Flexibility to pay for only what is used and provisioned; - Economy of scale, which enables sharing of the investments across branches; - Vendor-provided, cost-effective and efficient IT maintenance and operation; and - Increased speed for faster innovation. An effective cloud transformation requires engagement from all stakeholders across the organization. It is important to consider the company’s overall culture and security posture during the implementation process. Regarding security, expectations vary depending on the type of cloud the organization adopts. If it chooses to focus on IaaS, its main objective is to cut IT expenses and complexity without sacrificing security in the infrastructure or access processes. With a platform-as-a-service (PaaS) solution, however, the focus shifts to securing applications and data, as well as being in compliance with regulations. Security should always be on the table whenever an organization decides to transform its infrastructure or adopt a new service, from the design phase to the final implementation. Cloud solutions introduce many advantages, but with these advantages come more complex security concerns. When building a service, it’s important to consider all security controls to reduce the risk and increase the efficiency of each element. This means securing infrastructure and applications, managing identities and access, and securing all elements involved in the execution of a transaction. This also applies to services that have elements stored in the cloud. If a cloud service provider supplies the infrastructure, for example, it’s important to ensure the vendor will also provide the security controls. Of course, this type of infrastructure can impact the service-level agreement and cause customers to demand transparency. If the infrastructure supplied by the cloud service provider comes under attack, for example, it’s important to alert the customer’s security information and event management (SIEM) provider. About the Security Controls Just because the application or database installed on the IaaS is secure doesn’t mean the overall service is secure. It’s usually ideal to have a single database protection capability with visibility and control over the database, cloud and on-premises tools. The security controls in an effective IaaS program should include the ability to: - Manage data center identities and access. - Authenticate, authorize and manage users. - Secure and isolate virtual machines (VM). - Patch default images for compliance. - Monitor logs on all resources. - Isolate networks. Failing to check any one of the above boxes means compromising the security of workloads moved onto the IaaS system. An effective IaaS provides customers with a multilayer security strategy. Managing Data Center Identities and Access IBM SoftLayer, for example, employs security staff to monitor its access sites 24/7. Access to the data center requires an ID badge and biometric authentication. Then each level of the data center requires its own set of credentials, and only staff members whose roles require access will be permitted to enter a given level. All access is logged and monitored by closed-circuit television. 
Authentication and Authorization An effective IaaS solution includes policies that enable clients to create and manage user accounts and assign privileges. It should be able to check source IP addresses, prevent users from accessing the portal, and monitor activity to implement and effectively manage an access policy. IaaS provides user management and granular access/permissions capabilities for elements provided by the platform, including servers, storage and networks. Many solutions rely on the client to create and delete portal users. If the client is managing the servers, the service provider would defer to the client for this process. If the provider is managing the servers, it should work with the client's team to ensure the proper user accounts and privileges are available to the correct personnel. What are the risks associated with insecure VM isolation? What are the consequences of isolation failure? In short, they are multiple and serious: Cybercriminals can leverage weak VM isolation to manipulate assets inside the cloud IaaS. In a VM hopping attack, for example, bad actors compromise one VM to gain access to the other VMs located on the same hypervisor. The attackers use this access to switch off the system, compromise data and replicate multiple VMs to jack up the cost to the customer. Patch management involves patching shared devices, such as switches and routers, within a period consistent with security best practices. A highly automated cloud environment can complete patching by the time a new compliant server spins up or a workload migration starts. An IaaS solution should provide monitoring and incident management services. It should also establish explicit policies and processes for logging security events. Logging capabilities must include:
- Ongoing monitoring and management;
- Monitoring of network traffic using various techniques; and
- Analysis of security logs generated by platform components for irregular or suspicious activities.
Logged alerts should be handled in a timely manner and, if applicable, communicated to clients. The support and incident response teams should notify clients of any activity related to the infrastructure. IaaS solutions often use firewalls to control internet access to VMs. These capabilities enable clients to build secure, internet-facing environments, support shared and dedicated firewalls, and supply additional network controls.
I often hear about cloud-based security solutions that solve all security problems. It’s a simple fact that such an animal does not exist. Why? Because the problem domains are just too different. Therefore, security requirements are different as well. If you try to push the same security solution across all workloads, you’ll find it doesn’t work across them all — and that’s if you’re lucky. If you’re not lucky, you won’t know until it’s too late where the solution doesn’t work. Your applications are built with very different programming engines, databases, and middleware, and all those attributes help determine the type of security solution you should use. That brings in (necessary) complexity, which makes using “standard” security tools and processes an impossibility most of the time.
Researchers show that network coding can greatly improve the quality of service in P2P live streaming systems (e.g., IPTV). However, network coding is vulnerable to pollution attacks, where malicious nodes inject bogus data blocks into the network that are then combined with legitimate blocks at downstream nodes, making it impossible to decode the original blocks and degrading network performance. In this paper, the authors propose a novel approach to limiting pollution attacks by identifying malicious nodes. In their scheme, malicious nodes can be rapidly identified and isolated, so that the system can quickly recover from pollution attacks.
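To see why a single bogus block is so damaging, here is a minimal sketch of the pollution effect, using XOR over GF(2) as a simple stand-in for random linear network coding; the blocks and sizes are illustrative, not from the paper:

```python
import os

BLOCK_SIZE = 8  # bytes, illustrative only

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length blocks over GF(2)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two legitimate source blocks and one bogus block injected by a malicious node.
b1, b2 = os.urandom(BLOCK_SIZE), os.urandom(BLOCK_SIZE)
bogus = os.urandom(BLOCK_SIZE)

clean_combination = xor_blocks(b1, b2)
polluted_combination = xor_blocks(xor_blocks(b1, b2), bogus)

# A downstream node that already holds b2 tries to recover b1 from each combination.
recovered_clean = xor_blocks(clean_combination, b2)
recovered_polluted = xor_blocks(polluted_combination, b2)

print(recovered_clean == b1)     # True: decoding succeeds
print(recovered_polluted == b1)  # False: the bogus block corrupts the decoded output
```

Because downstream nodes keep recombining blocks, one polluted combination spreads through the overlay, which is why identifying and isolating the injecting node matters.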
First ransomware in D
H. S. Teoh hsteoh at quickfur.ath.cx
Tue Feb 2 14:01:06 UTC 2021

On Tue, Feb 02, 2021 at 01:34:35PM +0000, user1234 via Digitalmars-d wrote:
> On Tuesday, 2 February 2021 at 11:46:39 UTC, RazvanN wrote:
> > Looks like D has managed to make programs safer... malicious
> > programs that is.
> > https://www.bleepingcomputer.com/news/security/vovalex-is-likely-the-first-ransomware-written-in-d/
> I hope that the signatures wont wrongly affect phobos and druntime.
> That's a risk when everything is statically linked (which seems to be
> the case here as a comment mentions that the malware size is 32 mb).

The bad thing about this is that antivirus software may start labelling D programs as malware because they may latch on to phobos/druntime fragments as identifiers for this ransomware.

Famous last words: I *think* this will work...
A honey monkey is a program that imitates a human user to lure, detect and identify malicious activity on the Internet. According to Microsoft, which developed the concept, a honey monkey is an active client honeypot. The honey monkey behaves like a highly active and extremely unwary human Internet user, logging onto many suspect websites. The programs detect harmful code that could jeopardize the security of human visitors. Certain types of websites are more likely to contain malicious code, whether by design or as a result of hacking. Favored targets include the home pages of celebrities, sites that offer downloadable music and videos (particularly those that operate in violation of copyright law), pornographic sites and gaming cheat sites. Sophisticated hackers operate according to the principle of "minimizing the effort and maximizing the results." Effective honey monkeys take advantage of the same paradigm, scanning the Web for the URLs most likely to be compromised. In some cases, individual hackers can be personally identified. Microsoft developed a Web patrol system called Strider HoneyMonkeys to detect websites that frequently install spyware, Trojans and viruses on the computers of Internet users. Microsoft's system consists of multiple monkey programs running on virtual machines (VMs). Host systems have a range of patch levels to detect specific types of exploits. In addition to identifying and isolating uniform resource locators (URLs) that propagate malware, a program called Strider Tracer can detect configuration and file changes that occur following an exploit. Using this method, interconnected communities of websites have been discovered that use targeted URLs to exploit client-side vulnerabilities on unpatched computers. Once such a site and the nature of its activity have been identified, a patch is generated to counter the threat. In the first month of activity, the HoneyMonkey project detected malicious code on 752 unique URLs, hosted on 287 sites. Researchers were able to identify several "major players," each of whom is responsible for many exploit pages.
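The sketch below is only a toy loosely inspired by the honey monkey idea: it visits a list of hypothetical suspect URLs and flags responses whose content type suggests a pushed executable. Real systems such as Strider HoneyMonkey instead drive full browsers inside VMs and diff the file system and registry after each visit:

```python
# Toy "monkey" loop: visit suspect URLs from a disposable VM and flag responses
# whose headers suggest drive-by payloads. URLs and heuristics are hypothetical.
import requests

SUSPECT_URLS = ["http://example.test/warez", "http://example.test/cheats"]
SUSPICIOUS_TYPES = ("application/x-msdownload", "application/octet-stream")

for url in SUSPECT_URLS:
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        print(f"{url}: unreachable ({exc})")
        continue
    content_type = resp.headers.get("Content-Type", "")
    if content_type.startswith(SUSPICIOUS_TYPES):
        print(f"{url}: flag for analysis (served {content_type})")
```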
To simulate the existence of barriers that would induce link breakages, we exploited NS2's ability to read a Digital Elevation ... the NS2 code that governs the receipt of wireless packets from the MAC layer to make use of this topographical data. ... If, on the other hand, a barrier does not exist, then there must be a valid link between the source and destination nodes. ... 6.3 Evaluation We ran each simulation using the AODV and DSDV protocols respectively, for a total of twelve simulations.
Title: Proceedings of the ... International Workshop on Mobility Management & Wireless Access Protocols
Adding authentication methods
The SSO Agent authentication method must be configured on your firewall so that users can authenticate on an authentication domain. You can configure up to five SSO Agent authentication methods.
- Log in to the firewall's administration interface: https://firewall_IP_address/admin,
- Go to Configuration > Users > Authentication, Available methods tab.
- Click on Add a method and select SSO Agent in the drop-down menu.
- In the section on the right, select the relevant authentication domain from the drop-down list in the Domain name field.
- Continue with the configuration section by section according to the parameters below.

Enter the information about the main SN SSO Agent:
- Select from the list the host object that corresponds to the SN SSO Agent created earlier. Leave the object agent_ad selected by default.
- Enter the SSLKey defined when SN SSO Agent was installed. This key is used to encrypt exchanges between SN SSO Agent and the firewall in SSL. The strength of the pre-shared key indicates the password's level of security.
- Add all the LDAP directories that control the authentication domain concerned. They must be saved beforehand in the firewall's Network objects database. For more information, refer to the section Creating network objects.

Mode: since SN SSO Agent is installed on a Linux machine, select Syslog server mode.

Syslog server configuration:
- Listening IP address: select from the list the host object associated with the machine that hosts SN SSO Agent and its syslog server.
- Listening port: select from the list the port object representing the listening port on the syslog server. The object syslog is selected by default (UDP port 514).
- IP address search: regular expression that will be used to search for IP addresses in logs hosted on the syslog server. For this technical note, we used:
- User name search: regular expression that will be used to search for user names in logs hosted on the syslog server. For this technical note, we used: Replace "DOMAINNAME" with the authentication domain used. Remember to protect special characters, if you are using any.
- Message search: regular expression that will be used to search for connection messages in logs hosted on the syslog server. For this technical note, we used: Ensure that the format of this regular expression is correct so that you do not include unnecessary results in the search.

For more information on these elements, refer to the SNS user guide.

Maximum authentication duration: define the maximum length of an authenticated user's session. After this period expires, the firewall will delete the user associated with this IP address from its table of authenticated users, logging the user out of the firewall. This limit is defined in minutes or hours, and is set by default to 10 hours.

Refresh user groups updates: for every LDAP directory configured, the firewall will check for any changes to the LDAP directory groups. The firewall then updates the configuration of its directory, and sends this information back to SN SSO Agent. This limit is defined in minutes or hours, and is set by default to 1 hour.

Disconnection detection: enable the disconnection method so that authenticated users can be deleted when a host is disconnected or when a session is shut down. If this method is not enabled, the user will be logged out when the maximum authentication period expires, even when the session has been shut down. SN SSO Agent tests the accessibility of all hosts authenticated on the firewall by pinging them every 60 seconds.
To ensure the success of these tests:
- Workstations on the authentication domain must allow responses to pings (ICMP requests). The Windows firewall may block such requests in some cases.
- A rule in the firewall's filter policy must allow SN SSO Agent to test hosts on the authentication domain if the agent must access it through the firewall.

Consider offline after: if a host does not respond within the time frame set for the "Disconnection detection" test conducted every 60 seconds, SN SSO Agent will consider this host offline. The agent will then send a disconnection request to the firewall, which will delete the user from its table of authenticated users, logging the user out of the firewall. This duration, defined in seconds or minutes, is set by default to 5 minutes.

Enable DNS host lookup: enable this setting if the hosts connected to the firewall have several IP addresses or their addresses change regularly. This setting may be useful, for example, if your users often switch from an Ethernet configuration to a Wi-Fi connection. Periodically, SN SSO Agent will perform DNS requests (PTR) to check that machines have not changed their IP addresses. If there is a new IP address, the information will be sent to the firewall. To ensure the success of these tests:
- A Reverse lookup zone (right-click on the folder) must be added to the settings of the DNS server for the authentication domain,
- A rule in the firewall's filter policy must allow SN SSO Agent to test hosts on the authentication domain if the agent must access it through the firewall.

Ignored administration accounts: in the firewall's factory configuration, the authentication of this list of users is ignored. This list contains the usual logins dedicated to the administrator (Administrator and Administrateur by default). This mechanism was set up because the LDAP directory treats the execution of a service or an application (the Run as administrator feature, for example) as an authentication. As SN SSO Agent restricts authentication by IP address, this type of authentication may potentially replace the authentication of the user with an open session. The pre-defined list of "Ignored administrator accounts" allows SN SSO Agent to ignore their authentication. Edit it if necessary.
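Purely as an illustration of the two periodic checks described above (the liveness ping and the reverse DNS lookup), here is a rough sketch run against a hypothetical table of authenticated hosts; the real checks are performed internally by SN SSO Agent:

```python
# Hedged sketch of the periodic checks: a Linux-style ping for disconnection
# detection and a PTR lookup for DNS host lookup. The user table is hypothetical.
import socket
import subprocess

authenticated_hosts = {"192.0.2.10": "jdoe"}  # hypothetical IP -> user mapping

for ip, user in authenticated_hosts.items():
    # Disconnection detection: one ping with a short timeout (Linux ping flags).
    alive = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip], capture_output=True
    ).returncode == 0

    # DNS host lookup: reverse (PTR) query to notice an address change.
    try:
        hostname = socket.gethostbyaddr(ip)[0]
    except OSError:
        hostname = None

    print(ip, user, "online" if alive else "offline", hostname)
```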
Filler bots are incentivized to listen for events and fill orders because they earn a profit whenever they fill an order. These filler bots have wallet addresses with private keys and public keys, as do the user and SushiSwap. To execute the swap itself, the filler bots call the smart contract for the pair requested by the user if/when the price reaches the value set in the limit order. The smart contract in turn fills the order by sending funds to the user's wallet, and in the process it sends the limit order fee to the filler bot as profit. This exchange of funds is essentially an exchange of balances, authorized and communicated as cryptographically signed messages. The magic of public key cryptography allows these messages between the user's wallet, the filler bot's wallet, and SushiSwap's smart contracts to pass securely, because the private keys of these entities are protected by cryptographic algorithms that can only be solved in one direction. The messages passed between them can only fill the order to the parameters initialized by the user, because authorizing them requires access to a private key, which is held only by the entity that initialized the order (the user).
Header manipulation is used when specific components within SIP messages need to be modified. The reasons for header manipulation are:
- To resolve SIP protocol variances between different vendors
- To hide SIP topology by removing Via headers

Header Manipulation Actions
You can modify non-essential headers in SIP messages using header and parameter profiles. The following information summarizes the supported actions:
- Pass the header unchanged (whitelist functionality).
- Conditionally pass the header unchanged.
- Remove the header (blacklist functionality).
- Conditionally remove the header.
- Replace the name of the header. The replacement name cannot be that of a vital header.
- Conditionally replace the header content (appearing after the ":").
- Add a new instance of a header to a message regardless of whether or not the header already exists.
- Add the first instance of the header to the message, if a header with this name does not already exist.

Header Manipulation Operation
When the SIP profile has header manipulation configured for ingress, SIP headers are modified before the call is sent to the routing engine. When the SIP profile has header manipulation configured for egress, SIP headers are modified as the call leaves the SIP profile.

Header Manipulation Configuration Options
As with call routing, there are two ways of configuring header manipulation rules:
- WebGUI/Basic Header Manipulation
- Advanced XML Header Manipulation
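As a toy illustration of the removal action used to hide topology (not how an SBC actually implements the profiles above), the sketch below strips Via headers, including the compact "v:" form, from a raw SIP message:

```python
# Remove Via headers from a SIP message represented as CRLF-separated lines.
def strip_via_headers(sip_message: str) -> str:
    kept = []
    for line in sip_message.split("\r\n"):
        # "Via:" and its compact form "v:" identify the headers to drop.
        if line.lower().startswith(("via:", "v:")):
            continue
        kept.append(line)
    return "\r\n".join(kept)

example = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP proxy1.example.com;branch=z9hG4bK776\r\n"
    "Via: SIP/2.0/UDP 192.0.2.5;branch=z9hG4bK123\r\n"
    "From: <sip:alice@example.com>\r\n"
    "To: <sip:bob@example.com>\r\n\r\n"
)
print(strip_via_headers(example))
```

In a real deployment the SBC replaces the removed Via chain with its own, so that responses can still be routed back.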
The more complex a network is, the more vulnerable it is to attacks. In times when customers and partners have access to internal network structures via the internet, and can control various applications via web interfaces, IT employees are encountering more and more problems. Large companies, in particular, prefer to use penetration testing to check how well their security concept works. This [...] Businesses use IDSs (intrusion detection systems) and firewalls in order to keep attackers away from sensitive IT systems. These safeguards can be enhanced through so-called honeypots, which bait hackers to isolated network areas where more information on their attack patterns can be collected. Find out more here on how honeypots work and with which programs honeypots can be implemented for both [...] Tracking tools can provide website operators with a useful indication on how to adapt an online project to suit a target group. These tools focus on user profiles, which reveal how users find the website, and which content provokes interactions. This information is based on user data, which can be subject to stringent data protection guidelines in some countries within the European Union. Find out [...] FileZilla is considered to be a standard software for transferring data between locally connected computers and online servers. The network protocol is able to transfer via FTP and its encrypted variants, SFTP and FTPS. We’ve created an overview of the client program’s features and break down the details of this program from installation to data transfer. Being constantly faced with headlines about stolen passwords, it’s understandable that many users are concerned. Your best bet is to make your passwords as complicated as possible and have them consist of many different types of characters. But even this won’t help if it’s the actual log-in area that isn’t secure enough. Even today, attackers are still successful with the notorious and simple [...] Companies hosting their own websites, an online shop, or e-mail inboxes should make sure to separate the corresponding server of these from the local network. This makes it possible to protect devices within the company network from hacker attacks that take place on public networks. Solid protection comes in the form of a demilitarized zone, which separates endangered systems from sensitive [...] Most computer users are at least aware of the term firewall. When activated, they help protect computers. But announcements about blocked applications can become a source of irritation for many users, especially when the background information for such messages is unknown. But how do firewalls work? And what role do hardware firewalls play in protecting your computer? A big problem with the previous Internet Protocol version, IPv4, was the missing guarantee of security standards of integrity, authenticity, and confidentiality. This previous protocol lacked the necessary means to identify data sources or enable secure transport. The protocol suite IPsec, developed for IPv4’s successor, IPv6, has changed the situation for Internet Protocol overnight. Practically every PC user fears Trojan horses and computer viruses. Security is paramount if you are managing sensitive data or setting up a server. You need a comprehensive security concept to protect yourself against insidious malware. It’s helpful to know the different types of malicious software that exist, and how to combat and safely remove them.
Supplementary Table 6 from "Construction and forensic application of 20 highly polymorphic microhaplotypes", dataset posted on 05.05.2020 by Aliye Kureshi, Jienan Li, Dan Wen, Shule Sun, Zedeng Yang, Lagabaiyila Zha. Datasets usually provide raw data for analysis. This raw data often comes in spreadsheet form, but can be any collection of data on which analysis can be performed. This dataset contains the genotypes and CPI values of 12 parent/child duos based on microhaplotypes and STRs.
- Anomaly-based intrusion detection system
An anomaly-based intrusion detection system is a system for detecting computer intrusions and misuse by monitoring system activity and classifying it as either "normal" or "anomalous". The classification is based on heuristics or rules, rather than patterns or signatures, and will detect any type of misuse that falls outside normal system operation. This is in contrast to signature-based systems, which can only detect attacks for which a signature has previously been created.
In order to determine what is attack traffic, the system must be taught to recognise normal system activity. This can be accomplished in several ways, most often with artificial-intelligence techniques. Systems using neural networks have been used to great effect. Another method is to define what normal usage of the system comprises using a strict mathematical model, and flag any deviation from this as an attack. This is known as strict anomaly detection.
Cfengine: 'cfenvd' can be utilized to do anomaly detection
RRDtool: can be configured to flag anomalies
* [ftp://ftp.cerias.purdue.edu/pub/papers/sandeep-kumar/kumar-intdet-phddiss.pdf CLASSIFICATION AND DETECTION OF COMPUTER INTRUSIONS] thesis by Sandeep Kumar for Purdue University
* [http://artofhacking.com/files/phrack/phrack56/P56-11.TXT A strict anomaly detection model for IDS, Phrack 56 0x11, Sasha/Beetle]
* [http://www.cfengine.org/docs/cfengine-Anomalies.html Anomaly detection with cfenvd and cfenvgraph]
* [http://cricket.sourceforge.net/aberrant/rrd_hw.htm Notes on RRDTOOL implementation of Aberrant Behavior Detection]
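To make the strict-anomaly-detection idea described above concrete, here is a minimal sketch that models normal behaviour with a simple statistical baseline and flags large deviations; the metric (requests per minute), data, and threshold are illustrative only:

```python
# Strict anomaly detection sketch: learn a baseline from "normal" observations,
# then flag anything far outside it. Real systems use much richer models.
from statistics import mean, stdev

normal_rates = [42, 40, 45, 38, 44, 41, 39, 43]   # requests/min seen during training
mu, sigma = mean(normal_rates), stdev(normal_rates)

def is_anomalous(rate: float, k: float = 3.0) -> bool:
    """Flag any observation more than k standard deviations from the baseline."""
    return abs(rate - mu) > k * sigma

print(is_anomalous(44))    # False: within normal operation
print(is_anomalous(250))   # True: deviates from the learned model
```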
Performance Comparison of Wireless Mobile Ad-Hoc Networks
There are new challenges for routing protocols in Mobile Ad-hoc NETworks (MANETs), since traditional routing protocols may not be suitable for MANETs: some assumptions used by these protocols are not valid in MANETs, and some protocols cannot efficiently handle topology changes. Efficient routing protocols can provide significant benefits to mobile ad-hoc networks, in terms of both performance and reliability. The most popular routing protocols are Ad-hoc On-demand Distance Vector (AODV), Destination-Sequenced Distance-Vector routing protocol (DSDV), Dynamic Source Routing protocol (DSR), and Optimized Link State Routing (OLSR). Despite the popularity of these protocols, research efforts have not focused much on evaluating their performance when applied to Variable Bit Rate (VBR) traffic.
In studying the Imperium, Arrakis, and the whole culture which produced Muad'Dib, many unfamiliar terms occur. To increase understanding is a laudable goal, hence the definitions and explanations given below. —Dune, Frank Herbert
acceptable use policy (AUP): A policy that defines, for all parties, the approved uses of information, systems, and services within an organization.
access control: The process of granting or denying specific requests (1) to access and use information and related information processing services and (2) to enter specific physical facilities. Access control ensures that access to assets is authorized and restricted based on business and security requirements.
What is MITRE ATT&CK and How Can it Help Your Security?
How MITRE's evaluation can help empower end-users, provide product transparency, and motivate enhanced capabilities. By: Paul Diorio and Lee Lawson
"Language shapes the way we think and determines what we can think about." Benjamin Lee Whorf, Famed Linguist on the Sapir-Whorf Hypothesis

MITRE ATT&CK Shapes How Security Professionals Think About Security
While linguistic theory typically focuses on natural languages and their impact on human thought, a parallel can be drawn to how security professionals describe and share knowledge to combat adversary tactics, techniques, and procedures - the language of cybersecurity attacks. With this perspective, many practitioners within the security industry advocate for a common language to describe the cybersecurity threats faced by organizations every day. The language used to describe these threats would significantly shape the way we think and determine how we approach a holistic defense. In recent years, the MITRE ATT&CK framework has increasingly become that common language. It has gained significant influence over how modern security teams describe threat actor capabilities and subsequently translate defensive ideas into action. In our experience building Red Cloak™ TDR, we have found significant benefits in leveraging the ATT&CK framework language to drive innovation and develop our security analytic platform. Participating in the 2019 MITRE ATT&CK Evaluation of Red Cloak TDR advanced that goal one step further by teasing out some additional opportunities that our platform could leverage to keep our customers more secure.

2019 MITRE ATT&CK Evaluation Shines a Light...
MITRE launched the framework for ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) in 2015 to codify a common language to describe adversary actions. Today, many organizations are adopting the ATT&CK framework to better understand their coverage and explain their security program strategy. Similarly, most commercial security product vendors have shifted towards using ATT&CK to describe how they might best fit within enterprise security programs. In response, MITRE created an evaluation program around security products focused on empowering end-users with insights on how to operationalize those products against known adversary attacks, provide independent transparency on the capabilities of security products, and motivate product vendors to enhance their capabilities against adversary behaviors. For the benefit of transparency to our customers, and our commitment to continually innovate against adversaries, the Red Cloak TDR team recently participated in the 2019 MITRE evaluation to showcase our capabilities. In the coming weeks, the MITRE ATT&CK evaluation findings will be available to help security professionals determine which products meet their needs. Undoubtedly, it will also serve as points for product marketing teams to tout new features and describe competitive differentiators. More importantly, the key insights from the evaluation should help security analysts better understand how to leverage specific products to combat real-world threats. For the 2019 evaluation, MITRE simulated IRON HEMLOCK (AKA APT-29, Cozy Bear) as the model threat actor. Security teams can leverage ATT&CK to think about key visibility points within their environment, as well as overall detection coverage and strategies.
It is important to note that the 2019 MITRE ATT&CK evaluation is constrained to endpoint products only, and while these solutions play a critical role in defending against modern threats, they do not provide a total solution on their own. Red Cloak TDR acknowledges this reality by integrating data from a wide variety of sensors and visibility providers, including endpoint agents, network sensors, firewalls, proxies, public cloud provider APIs, and more. While there will be more information available as MITRE finalizes their findings in the coming weeks, our team wanted to highlight some more immediate benefits for our product and customers. Through the preparation and execution of the evaluation, our team gleaned new insights on how to operationalize ATT&CK as a catalyst for enabling new capabilities. … And Motivates Innovation One of MITRE's stated goals in the creation of the framework and evaluation is to push the security vendor community to enhance their abilities to detect known adversary behaviors. For example, our team uncovered novel opportunities for detections with Red Cloak TDR while preparing for the evaluation through a close collaboration between our threat intelligence researchers, incident responders, penetration testers, countermeasure creators, data scientists, and big data engineers. In preparation, our interdisciplinary team was able to simulate live attacks based on our knowledge of the IRON HEMLOCK threat actor and rapidly work to generate insights and build new capabilities. Our purple team approach showcases our ability as a security leader to leverage that experience to empower your own security teams to protect your organization with Red Cloak TDR. As a specific example, the Red Cloak Agent can collect data from Event Tracing for Windows (ETW) on the endpoint and now applies Red Cloak TDR countermeasures to hunt for adversary activity within it (especially IRON HEMLOCK inspired attacks). Examples of these capabilities include: - PowerShell Script Block Logging reconstruction allows for analysts to see the entire PowerShell script being executed by the adversary as well as which functions were executed. - Windows Management Instrumentation (WMI) detections for malicious use of WMI, including collection of custom events. In addition to empowering human analysis, these sources also enable automated detection techniques within our Red Cloak TDR platform. We have institutionalized our interdisciplinary process to continually push towards new data sources and creation of new analytic insights to deliver bleeding edge security value to our customers. The Red Cloak TDR team joined the 2019 MITRE ATT&CK evaluation with the full intent of showcasing our approach to providing industry leading security value. Along the way, we joined an elite cadre of pressure-tested security vendors, delivered product improvements focused on APT-level TTPs, and we improved our understanding of how ATT&CK evaluation results can help security teams evaluate solutions for their organization. At the end of the day, MITRE's efforts are raising the collective security bar and we are proud to meet the challenge.
Keywords: cloud computing, network security, IaaS, life cycle, network policy
Network security requirements based on virtual network technologies in IaaS platforms and corresponding solutions were reviewed. A dynamic network security architecture was proposed, which was built on the technologies of software defined networking, Virtual Machine (VM) traffic redirection, network policy unified management, software defined isolation networks, vulnerability scanning, and software updates. The proposed architecture was able to obtain the capacity for detection and access control for VM traffic by redirecting it to configurable security appliances, and ensured the effectiveness of network policies in the total life cycle of the VM by configuring the policies to the right place at the appropriate time, according to the impacts of VM state transitions. The virtual isolation domains for tenants' VMs could be built flexibly based on VLAN policies or Netfilter/Iptables firewall appliances, and vulnerability scanning as a service and software update as a service were both provided as security supports. Through cooperation with IDS appliances and automatic alarm mechanisms, the proposed architecture could dynamically mitigate a wide range of network-based attacks. The experimental results demonstrate the effectiveness of the proposed architecture.
Tsinghua University Press. Lin Chen, Xingshu Chen, Junfang Jiang, et al. Research and Practice of Dynamic Network Security Architecture for IaaS Platforms. Tsinghua Science and Technology 2014, 19(05): 496-507.
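As a rough, hypothetical illustration of the Netfilter/iptables isolation idea mentioned in the abstract (not the paper's actual configuration), the following sketch drops forwarded traffic between two example tenant subnets on a virtualization host:

```python
# Build a simple isolation domain between two hypothetical tenant subnets by
# appending DROP rules to the FORWARD chain. Requires root on the host.
import subprocess

TENANT_A = "10.0.1.0/24"
TENANT_B = "10.0.2.0/24"

rules = [
    ["iptables", "-A", "FORWARD", "-s", TENANT_A, "-d", TENANT_B, "-j", "DROP"],
    ["iptables", "-A", "FORWARD", "-s", TENANT_B, "-d", TENANT_A, "-j", "DROP"],
]

for rule in rules:
    subprocess.run(rule, check=True)  # raises CalledProcessError if a rule fails
```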
userv is a Unix system facility to allow one program to invoke another when only limited trust exists between them. It is a tool for system administrators, who often find themselves with a program running as one user which needs to be able to do certain things as another user. For example, the author's machine's news system needs to scan its users' newsrcs to ensure that the right newsgroups are fetched. Before userv that part of the news system had to run as root, and clumsily use `su'.
Swatch was originally written to actively monitor messages as they were written to a log file via the UNIX syslog utility. It has multiple methods of alarming, both visually and by triggering events. The perfect tool for a master loghost. It is known to work flawlessly on Linux (RH5), BSDI, and Solaris 2.6 (patched).
boclient is a remote Windows administration tool which uses BackOrifice or NetBus servers on Windows. It is an improvement of version 1.21. Most recent versions have GNU readline support, NetBus commands, portability to other platforms (BeOS, QNX and 64bit architectures like Alpha) and async network I/O.
LOMAC uses Low Water-Mark Mandatory Access Control to protect the integrity of processes and data from viruses, trojan horses, malicious remote users, and compromised network server daemons. The LOMAC loadable kernel module can be used to harden Linux systems without any changes to existing kernels, applications, or configuration files. Due to its simplicity, LOMAC itself requires no configuration, regardless of the users and applications present on the system. Although some features and fixes remain to be implemented, LOMAC presently provides sufficient protection to thwart some attacks, and is stable enough for everyday use.
dirtypgp is a quick-and-dirty wish script to run in an X Window environment. It is a workslate upon which clear or cipher text may be cut and pasted. A series of button controls then are used to convert to and from ciphered and clear text, encoded with the PGP package. It was originally written by Carsten Meyer, who released it under the GPL.
The main goal of the Linux Trustees project is to create an advanced permission management system for Linux. The solution proposed is mainly inspired by the approach taken by Novell Netware and the Java security API. Special objects (called trustees) can be bound to every file or directory. The trustee object can be used to ensure that access to a file, directory, or directory with subdirectories is granted (or denied) to a certain user or group (or all except user or group). Trustees are like POSIX ACLs, but trustee objects can affect entire subdirectory trees, while ACLs affect only a single file. Trustees works with the 2.6 Linux kernel.
Libmcrypt is a library which provides a uniform interface to several symmetric encryption algorithms. It is intended to have a simple interface to access encryption algorithms in ofb, cbc, cfb, and ecb modes. The algorithms it supports are DES, 3DES, RIJNDAEL, Twofish, IDEA, GOST, CAST-256, ARCFOUR, SERPENT, SAFER+, and more. The algorithms and modes are also modular so you can add and remove them on the fly without recompiling the library.
The OpenCA Project is a collaborative effort to develop a robust, full-featured and Open Source out-of-the-box Certification Authority implementing the most used protocols with full-strength cryptography world-wide. OpenCA is based on many Open-Source Projects. Among the supported software is OpenLDAP, OpenSSL, Apache Project, Apache mod_ssl.
Fake banking apps posing as apps from three major Indian banks made their way into the official Google Play store. The malicious apps claim to increase the credit limit for customers of the three banks. Attackers use bogus phishing forms to collect credit card details and internet banking credentials from the victims. The fake banking apps appear to have been uploaded to Google Play between June and July 2018, and they were downloaded by thousands of users. Security researchers from ESET spotted the apps uploaded under three different developer names, but they are linked to a single attacker.
How Do the Fake Banking Apps Steal Credentials
The fake banking apps pose as apps from three major Indian banks: ICICI, RBL, and HDFC. When the user launches one of the apps, it opens a bogus form requesting credit card details; once the user submits the information, it asks for their internet banking login credentials. After submitting the forms, with or without entering credentials, the user is led to the final screen, which shows "Customer Service Executive", and the app offers no other functionality.
The login forms are not well designed; users already experienced with the legitimate versions of the apps can easily spot the difference. Worse still, the information stolen through the bogus forms is sent to the server in plain text, and the server has no authentication, which lets anyone access the information. If you have installed one of these apps, it is time to remove it and check your bank account for suspicious activity. Fake Android apps evolve rapidly every day; cybercriminals use a number of methods to upload malicious apps to the Play store and continuously target users.
Common Defences and Mitigations
- Give careful consideration to the permissions asked for by applications.
- Download applications from trusted sources.
- Stay up to date with the latest version.
- Encrypt your devices.
- Make frequent backups of important data.
- Install anti-malware on your devices.
- Stay strict with the CIA cycle.
Chapter 21. level
The logging level, from various sources including rsyslog (severitytext property), a Python logging module, and others.
emerg, system is unusable.
alert, action must be taken immediately.
crit, critical conditions.
err, error conditions.
warn, warning conditions.
notice, normal but significant condition.
info, informational messages.
debug, debug-level messages.
The two following values are not part of syslog.h but are widely used:
trace, trace-level messages, which are more verbose than debug.
unknown, when the logging system gets a value it doesn't recognize.
Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from Python logging, you can map ERROR to err, and so on.
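For illustration, one possible mapping from Python logging levels to the values listed above; the exact correspondence is a deployment choice rather than something mandated here:

```python
# Minimal sketch: translate Python logging levels to the level strings above.
import logging

PYTHON_TO_LEVEL = {
    logging.CRITICAL: "crit",
    logging.ERROR: "err",
    logging.WARNING: "warn",
    logging.INFO: "info",
    logging.DEBUG: "debug",
}

def to_level(record: logging.LogRecord) -> str:
    # Anything not recognized maps to "unknown", as described above.
    return PYTHON_TO_LEVEL.get(record.levelno, "unknown")

record = logging.LogRecord("app", logging.ERROR, __file__, 1, "boom", None, None)
print(to_level(record))  # err
```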
Cached pages are categorized as Anonymizing Utilities, since cached data allows access to content that might otherwise be blocked by other categories. Example: http://www.sex.com is blocked by your policy due to the category Sex. If someone fetches the cached copy from Google, they can still reach the content, because Google itself is categorized as Search Engines. So accessing the cache would bypass your policy. To prevent this, the Google cache is also categorized as Anonymizing Utilities. If you want to allow this, you should add a new rule before your categorization rule set takes effect. The easiest way might be to allow Google generally.
In the past, I have recommended that you never permit users to download any Word file with the .doc file extension from external e-mail because of the macro virus possibility. Even if they think they know who sent the file, your network could be in danger since the sender’s system may have been hijacked by a virus and manipulated into sending a corrupted file. The macro threat can be just as dangerous when you have users in larger companies who have never met the fellow employee sending them a file. Although it may be construed as extremist, requiring users to send and receive messages employing the .rtf file format is a simple solution that lets you continue to use macros. If you don’t think you can enforce a no-.doc file extension rule, another solution is to use a digital certificate to authenticate the origin of any files. Certificates perform two major functions: - They protect files from tampering; you know the macro you activate is identical to the one the originator intended for you to get. Nothing’s been added or removed. - They can be used to identify the sender so your security settings will let the macro run. Definitions of digital certificate and signature According to Microsoft documentation: - A digital certificate is an attachment to a macro project, used for security purposes. A digital certificate names the macro’s source, plus additional information about the identity and integrity of that source. To digitally sign macro projects, you need to obtain and install a digital certificate that you can use to sign macro projects to identify them as your work. - A digital signature is a digital stamp of identification on a macro that confirms that the macro originated from the developer who signed it and that the macro hasn’t been altered. To digitally sign macro projects and identify them as your work, you need to obtain and install a digital certificate. When you open a file or load an add-in that contains a digitally signed macro, a digital signature appears on your computer as a certificate that names the macro’s source, plus additional information about the identity and integrity of that source. Three ways to obtain digital certificates User digital certification: This is the easiest way to deal with the problem of obtaining a certificate. However, whether or not you can use the digital certificate method depends on the security level settings and protocols observed by your company. In many smaller organizations, self-certification is the simplest route to improving macro security dramatically without adding burdensome layers of management to the process. MS Office includes a digital certificate program. To access it, run the Office setup program and look in Office Tools on the Selecting Features screen. Macros are really just pieces of Visual Basic code, so select Digital Signatures For VBA projects and Run From My Computer. For more help, look under Obtain A Digital Certificate in Word’s help index on your system. To determine whether the Office 2000 SelfCert tool is installed on your system, use the Find Files Or Folders tool to search for a file called selfcert.exe. As a security administrator, you could delete this program from all users’ systems after you create a certificate for them. Internal certificate authority: The second, and often better, way for users to obtain a digital certificate is for the security administrator to act as an internal certificate authority and distribute digital certificates using the Microsoft Certificate Server. 
Security policies must be established to determine who can issue certificates. Also, steps must be taken to control which users are authorized to sign their own macros and which ones an administrator must approve. Commercial certification authorities: The third way to obtain a digital certificate is to use an external authority such as VeriSign. You can learn more about certification authorities at the MS Security Advisor Web site. This is a complex process because it is used to guarantee the origin of your work to everyone on the Internet, not just users within your organization. A Class 3 digital certificate is only for commercial software publishers and requires measures such as a Dun & Bradstreet credit check. The Authenticode Technology Web page has additional background on using digital certificates with Microsoft products. Using digital certificates You won't need to buy any new tools, but as with all security measures, there are costs involved in creating certificates and either certifying macros yourself or training users to do it. Once you have a digital certificate or have distributed certificates to authorized users, your next step is to set the security level (high, medium, or low) that specifies how your Word or Excel program will treat macros. If you open the Tools menu and select Macro | Security, you will see descriptions of the three security levels. Now you're ready to start signing macros. If there are only a few authorized developers in your organization, it is probably easiest if the security administrator checks each macro, locks it, and affixes the certificate. This way, if the developer is using his or her individual certificate, the origin can easily be tracked. Open a document with the macro. Next, open the VBE (Visual Basic Editor) by pressing [Alt][F11] and select the project. Now select Digital Signature from the Tools menu and choose from the available certificates. If you don't lock the macro and any changes are made, the certificate disappears. This is far from foolproof, and developers who know how this works can always "steal" the macro and put their own certificate on it. However, this is not a way of enforcing copyright; it merely provides insurance that your users have a solid layer of protection against macro viruses. John McCormick is a security consultant and technical writer (five books and 15,500-plus articles and columns) who has been working with computers for more than 35 years.
System and Methods for Detection of Adverse Events
Detecting adverse or anomalous events in a captured video or a live video stream is a challenging task due to the subjective definition of "anomalous" as well as the duration of such events. Anomalous events are usually short-lived and occur rarely. We present an unsupervised solution for this problem. Our method is able to capture the video segment where the anomaly happens via analysis of the interactions between spatially co-located interest points. The evolution of their motion characteristics is modeled, and abrupt changes are used to temporally segment the videos. Spatiotemporal and motion features are then extracted to model standard events and identify the anomalous segments using a one-class classifier.
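As a hedged sketch of the final step (modeling standard events with a one-class classifier), the following uses scikit-learn's OneClassSVM on placeholder feature vectors; the actual spatiotemporal and motion features described above are of course much richer:

```python
# Fit a one-class classifier on features from "standard" segments, then score
# new segments. Feature values here are synthetic placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_features = rng.normal(0.0, 1.0, size=(200, 16))          # normal segments
test_features = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 16)),                          # look normal
    rng.normal(6.0, 1.0, size=(2, 16)),                          # look anomalous
])

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_features)
print(clf.predict(test_features))  # +1 = consistent with standard events, -1 = anomalous
```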
The Value of Honeypots Now that we have defined honeypots and how they work, we can attempt to establish their value. As mentioned earlier, unlike mechanisms such as firewalls and intrusion detection systems, a honeypot does not address a specific problem. Instead, it is a tool that contributes to your overall security architecture. The value of honeypots and the problems they help solve depend on how you build, deploy, and use them. Honeypots have certain advantages and disadvantages that affect their value. In this chapter we will examine those advantages and disadvantages more closely. We will also look at the differences between production and research honeypots and their respective roles. Advantages Of Honeypots Honeypots have several advantages unique to the technology. We will review four of them here. One of the challenges the security community faces is gaining value from data. Organizations collect vast amounts of data every day, including firewall logs, system logs, and Intrusion Detection alerts. The sheer amount of information can be overwhelming, making it extremely difficult to derive any value from the data. Honeypots, on the other hand, collect very little data, but what they do collect is normally of high value. The honeypot concept of no expected production activity dramatically reduces the noise level. Instead of logging gigabytes of data every day, most honeypots collect several megabytes of data per day, if even that much. Any data that is logged is most likely a scan, probe, or attack: information of high value. Honeypots can give you the precise information you need in a quick and easy-to-understand format. This makes analysis much easier and reaction time much quicker. For example, the Honeynet Project, a group researching honeypots, collects on average less than 1MB of data per day. Even though this is a very small amount of data, it contains primarily malicious activity. This data can then be used for statistical modeling, trend analysis, detecting attacks, or even researching attackers. This is similar to a microscope effect. Whatever data you capture is placed under a microscope for detailed scrutiny. For example, in Figure 4-1 we see a scan attempt made against a network of honeypots. Since honeypots have no production value, any connection made to a honeypot is most likely a probe or attack. Also, since such little information is collected, it is very easy to collate and identify trends that most organizations would miss. In this figure we see a variety of UDP connections made from several systems in Germany. At first glance, these connections do not look related, since different source IP addresses, source ports, and destination ports are used. However, a closer look reveals that each honeypot was targeted only once by these different systems. Analysis reveals that an attacker is doing a covert network sweep. Figure 4-1 Covert network sweep by an attacker picked up by a network of honeypots He is attempting to determine what systems are reachable on the Internet by sending UDP packets to high ports, similar to how traceroute works on Unix. Most systems have no port listening on these high UDP ports, so when a packet is sent, the target systems send an ICMP port unreachable error message. These error messages tell the attacker that the system is up and reachable. The attacker makes this network sweep difficult to detect because he randomizes the source port and uses multiple source IP addresses.
In reality, he is most likely using a single computer for the scan but has aliased multiple IP addresses on the system or is sniffing the network for return packets to the different systems. Organizations that collect large amounts of data would most likely miss this sweep, since multiple source IP addresses and source ports make it hard to detect. However, because honeypots collect small amounts of high-value data, attacks like these are extremely easy to identify. This demonstrates one of the most critical advantages of honeypots. Another challenge most security mechanisms face is resource limitations, or even resource exhaustion. Resource exhaustion is when a security resource can no longer continue to function because its resources are overwhelmed. For example, a firewall may fail because its connections table is full, it has run out of resources, or it can no longer monitor connections. This forces the firewall to block all connections instead of just blocking unauthorized activity. An Intrusion Detection System may have too much network activity to monitor, perhaps hundreds of megabytes of data per second. When this happens, the IDS sensor's buffers become full, and it begins dropping packets. Its resources have been exhausted, and it can no longer effectively monitor network activity, potentially missing attacks. Another example is centralized log servers. They may not be able to collect all the events from remote systems, potentially dropping and failing to log critical events. Because they capture and monitor little activity, honeypots typically do not have problems of resource exhaustion. As a point of contrast, most IDS sensors have difficulty monitoring networks that run at gigabit speeds. The speed and volume of the traffic are simply too great for the sensor to analyze every packet. As a result, traffic is dropped and potential attacks are missed. A honeypot deployed on the same network does not share this problem. The honeypot only captures activities directed at itself, so the system is not overwhelmed by the traffic. Where the IDS sensor may fail because of resource exhaustion, the honeypot is not likely to have a problem. A side benefit of the limited resource requirements of a honeypot is that you do not have to invest a great deal of money in hardware for a honeypot. Honeypots, in contrast to many security mechanisms such as firewalls or IDS sensors, do not require the latest cutting-edge technology, vast amounts of RAM or chip speed, or large disk drives. You can use leftover computers found in your organization or that old laptop your boss no longer wants. This means that not only can a honeypot be deployed on your gigabit network but it can be a relatively cheap computer. I consider simplicity the biggest single advantage of honeypots. There are no fancy algorithms to develop, no signature databases to maintain, no rulebases to misconfigure. You just take the honeypot, drop it somewhere in your organization, and sit back and wait. While some honeypots, especially research honeypots, can be more complex, they all operate on the same simple premise: If somebody or something connects to the honeypot, check it out. As experienced security professionals will tell you, the simpler the concept, the more reliable it is. With complexity come misconfigurations, breakdowns, and failures. Return On Investment When firewalls successfully keep attackers out, they become victims of their own success.
Management may begin to question the return on their investment, as they perceive there is no longer a threat: "We invested in and deployed a firewall three years ago, and we were never attacked. Why do we need a firewall if we have never been hacked?" The reason they were never hacked is the firewall helped reduce the risk. Investments in other security technologies, such as strong authentication, encryption, and host-based armoring, face the same problem. These are expensive investments, costing organizations time, money, and resources, but they can become victims of their own success. In contrast, honeypots quickly and repeatedly demonstrate their value. Whenever they are attacked, people know the bad guys are out there. By capturing unauthorized activity, honeypots can be used to justify not only their own value but investments in other security resources as well. When management perceives there are no threats, honeypots can effectively prove that a great deal of risk does exist. For example, once I was in Southeast Asia conducting a security assessment for a large financial organization. I was asked to do a presentation for the Board of Directors on the state of their security. As always, I had a honeypot running on my laptop. About 30 minutes before the presentation, I connected to their network to make some last-minute changes. Sure enough, while I was connected to their network, my system was probed and attacked. Fortunately, the honeypot captured the entire attempt. In this case the attack was a Back Orifice scan. When the attacker found my system, they thought it was infected and executed a variety of attacks, including attempting to steal my password and reboot the system. I then went with this captured attack and used it to open my presentation to the Board. This attack demonstrated to the Board members that not only did active threats exist but they tried, and succeeded, in penetrating their network. It is one thing to talk about such threats, but demonstrating them, keystroke by keystroke, is far more effective. This proved extremely valuable in getting the Board's attention.
DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. It is an e-mail validation standard that allows organizations to detect and prevent email spoofing and to stop fraudulent emails from being sent to customers using the company's domain names. DMARC builds on the SPF and DKIM protocols, which on their own did not give organizations the control over their email channels that they wanted. DMARC offers three policy settings: monitor, quarantine, and reject, so organizations need to decide how strictly the authentication of their emails should be enforced.
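As a rough illustration of the monitor/quarantine/reject choice, the sketch below looks up a domain's published DMARC policy. It is a minimal sketch, assuming the third-party dnspython package is installed; "example.com" is only a placeholder domain.

```python
# Minimal sketch: look up and loosely parse a domain's DMARC policy record.
# Assumes the third-party dnspython package is installed; example.com is a placeholder.
import dns.resolver

def get_dmarc_policy(domain: str) -> dict:
    """Return the tag=value pairs of the _dmarc TXT record, or {} if none exists."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return {}
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            # Split "v=DMARC1; p=reject; rua=mailto:..." into a tag dictionary.
            return dict(
                tag.strip().split("=", 1)
                for tag in record.split(";")
                if "=" in tag
            )
    return {}

if __name__ == "__main__":
    policy = get_dmarc_policy("example.com")
    # The "p" tag carries the policy: none (monitor), quarantine, or reject.
    print(policy.get("p", "no DMARC record published"))
```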
A potent Android malware has returned and is spreading through SMS phishing attacks. The malware can steal banking data, personal information, private communications and much more. It is called FakeSpy and has been active since 2017. It originally targeted users in Japan and South Korea, but it now targets Android users worldwide, with the necessary changes made to deceive users across Asia, Europe and North America. FakeSpy is constantly evolving and "evolving fast": a new version of the malware is released every week, with new features and evasion techniques. The Android malware works as an information stealer. It steals SMS messages, financial information, application data and accounts, and it also reads contact lists and much more. The recent campaign targets users in China, Taiwan, France, Switzerland, Germany, the United Kingdom, the USA and other countries. The malware tries to install itself on the victim's device via a phishing SMS claiming to be related to a lost package from a local postal or delivery service. The phishing SMS contains a link that directs users to a fake website, where they are instructed to download an app that appears to come from the local post office. For example, UK users are prompted to download a specially crafted fake version of the Royal Mail app. In America it is a fake US Postal Service app, in Germany Deutsche Post, in France La Poste, in Japan Japan Post, in Switzerland Swiss Post and in Taiwan Chunghwa Post. By downloading these apps, users in fact download the Android malware. The fake applications look very much like the real ones. After the application is downloaded - which requires the user to enable installation from unknown sources - the fake page redirects users to the legitimate website so that they do not suspect anything. The malware also requests many permissions, which does not seem unusual because that is common in legitimate applications. Once installed, FakeSpy can monitor the device to steal various information: name, phone number, contacts, bank details, cryptocurrency details. It also monitors messages and applications. The malware takes advantage of the infected device to spread, sending the same phishing SMS to all of the victim's contacts. Researchers say the attacks are not targeted: the hackers try to reach as many users as possible to steal personal and especially banking data. FakeSpy has been active for the last three years and remains a threat to Android users as it evolves and changes. However, users can avoid falling victim to the malware if they are extremely careful with unexpected SMS messages, especially ones that claim to come from organizations and ask the user to open links or files. Finally, a mobile protection program can also help identify the threat.
In this recipe, we'll explore the use of basic authentication for the Nagios Core web interface, probably the single most important configuration step in preventing abuse of the software by malicious users. By default, the Nagios Core installation process takes the sensible step of locking down the CGI scripts in its recommended Apache configuration file with standard HTTP authentication for a default user named nagiosadmin, with full privileges. Unfortunately, some administrators take the step of removing this authentication or never installing it despite the recommendations in the installation guide. It's a good idea to install it and keep it in place even on private networks, especially if ...
The majority of Joplin's development is carried out in the public domain. This includes the discussion of issues on GitHub, as well as the submission of pull requests and related discussions. The transparency of these processes allows for collaborative problem-solving and shared insights. However, there is one aspect that operates behind closed doors, and for good reason: addressing cybersecurity vulnerabilities. It is imperative that these issues remain undisclosed until they have been resolved. Once a solution is implemented, it is usually accompanied by discreet commits and a message in the changelog to signify the progress made. Typically, the process begins with an email from a security researcher. They provide valuable insights, such as a specially crafted note that triggers a bug, or an API call, along with an explanation of how the application's security can be circumvented. We examine the vulnerability, create a fix, and create automated test units to prevent any accidental reintroduction of the vulnerability in future code updates. An example of such a commit is: 9e90d9016daf79b5414646a93fd369aedb035071 We then share our fix with the researcher for validation. Additionally, we often apply the fix to previous versions of Joplin, depending on the severity of the vulnerability. The contribution of security researchers in this regard is immeasurable. They employ their ingenuity to identify inventive methods of bypassing existing security measures and often discover subtle flaws in the code that might otherwise go unnoticed. We would like to express our sincere gratitude to the security researchers who have assisted us throughout the years in identifying and rectifying security vulnerabilities! In many ways I agree with all of the words, the good-will, the assessment and the positive intentions. There is just one problem, and it's a serious one. The NSA (and similar organizations in other countries) know about these processes, and they have a vital interest in undermining every widely used piece of software. Not only do they get paid for it, not only do they command vast resources, they are also backed (sometimes more, sometimes less) by their own government or foreign governments. They are not just hacking; they pay hackers, and they help each other. They know that it is vital to work on every front. This includes writing (or contributing to) the rules of important internet standards, going to conferences to lobby for this change or against that change, providing commonly used libraries (including open source), and having contacts in every circle. But worst of all, they do everything - "for the better of mankind" - to make sure that such practices will not end. Now back to the long list of security researchers - the efforts of each one of them being very much appreciated ... in general. How many of them have a known and verified C.V. and history? How many of them might have more than just one single goal? How many of them, without any ill intentions, may be gettin' paid by some project in return for some ... and how many did nothing wrong all their lives, so when a bribe doesn't work, extortion may do the job? Don't get me wrong. I am not accusing any single one of them. But even a person who intentionally contributed to the closure of three minor vulnerabilities, while being misguided in many ways, could easily contribute to keeping one other, serious vulnerability wide open. This does not mean that you should stop doing what you do, or change what you're doing.
But it means that the worldwide management of cyber in-security will not stop at the gates of GitHub or Joplin. It is much more reasonable to believe that both of them are high on the list of targets. Well, yes, but a lot of this is outside our control, so we do what we can do. In this particular case, we have a proof-based process, with fixes and associated test cases, that can be independently verified. Even if someone as you say found four vulnerabilities and only reported three, that's still a win, and perhaps someone else will eventually find the remaining one. As for the security researcher identity, some prefer to remain anonymous, but I don't see how that's relevant? Note that we implement the fix - a white-hat hacker doesn't have carte blanche to change the codebase as they want, they simply provide a proof of concept, which we can check and use for our fix and tests. This sounds very easy because even a refrigerator can find a vulnerability! Thanks to all of these researchers sharing their work and thank you Laurent! Most vulnerabilities are due to what? A good number of them are XSS vulnerabilities, which is why we have a whole section about it in the coding style! I agree with everything you say.
OSSEC - The open source Intrusion prevention system OSSEC is an open source host-based intrusion detection and prevention system (HIPS) that performs both profile-based and signature-based analysis to detect and prevent computer intrusions. It is backed by a company named Trend Micro. OSSEC was initially developed by Daniel B. Cid to compensate for the lack of scalability of Tripwire (used for file integrity checking). More about the story of the evolution of OSSEC can be found here. An excerpt from Wikipedia that explains the journey of the OSSEC project to date: In June 2008, the OSSEC project, and all the copyright owned by the project leader, Daniel B. Cid, were acquired by Third Brigade, Inc. They promised to continue to contribute to the open source community and extend commercial support and training to the OSSEC open source community. In May 2009, Trend Micro acquired Third Brigade and the OSSEC project, with promises to keep it open source and free. Capabilities and features OSSEC can perform: So we see that OSSEC has evolved quite a bit with these features. Out of all these features, the one that stands out is OSSEC's ability to analyse logs. OSSEC has a very powerful log analysis engine that is capable of analysing almost every type of log generated on a system. Here is an excerpt of the key feature description from the OSSEC official website: Strengths and Weaknesses Some of the strengths and weaknesses of the OSSEC IPS: Here are some links that might be useful if you want to learn more about OSSEC:
Welcome to the webpage of the Telematics Research Group! Telematics is a technology that combines the areas of telecommunications and informatics, nowadays known as Information and Communications Technology (ICT). The main research focus of the group includes the following topics: Mobile Ad Hoc Networks (MANETs), Wireless Sensor Networks (WSNs), and Wireless Mesh Networks (WMNs). The group investigates security-related issues in communication networks, such as wireless physical layer security, authentication, trust and reputation mechanisms, and anonymity in MANETs, WSNs and WMNs. Network services, such as Voice over IP (VoIP), are also considered.
Resource-based Access Control (RBAC)
- A set of credentials typically corresponds to a user account, but they can also be machine-to-machine credentials.
- Tenant is "a group of users who share a common access with specific privileges to a software instance".
- It can be a company, a department, a team, etc.
- By definition, a tenant can have multiple users (sets of credentials).
- A user can join several tenants.
- Role is a named set of resources, used to grant users access to those resources.
- A user can have multiple roles.
- A role can be assigned to multiple users.
- Tenant roles are valid only for one specific tenant.
- Global roles are valid across all tenants.
- Resource is an identifier of an actual software resource or an action performed on that resource.
- Having access to a resource means having rights to what it represents.
- Any resource can be assigned to several roles.
- A role can have multiple resources.
- Resources cannot be assigned directly to credentials; credentials can have access to a resource only through a role.
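The relationships above can be captured in a few lines of code. The sketch below is illustrative only (names and types are ours, not taken from any particular product); it shows the key rule that credentials reach resources only through roles, and that tenant roles are scoped to a single tenant while global roles apply everywhere.

```python
# Minimal sketch of the tenant/role/resource model described above (illustrative only).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    resources: frozenset[str]          # resource identifiers granted by this role
    tenant: str | None = None          # None marks a global role

@dataclass
class User:
    credentials: str                   # a user account or machine-to-machine credentials
    tenants: set[str] = field(default_factory=set)
    roles: list[Role] = field(default_factory=list)

    def can_access(self, resource: str, tenant: str) -> bool:
        """Access is granted only through a role, never directly to credentials."""
        if tenant not in self.tenants:
            return False
        for role in self.roles:
            if role.tenant not in (None, tenant):   # tenant roles are tenant-scoped
                continue
            if resource in role.resources:
                return True
        return False

# Example: a user who belongs to two tenants but holds a role in only one of them.
reader = Role("report-reader", frozenset({"reports:read"}), tenant="acme")
alice = User("alice", tenants={"acme", "globex"}, roles=[reader])
assert alice.can_access("reports:read", "acme")
assert not alice.can_access("reports:read", "globex")
```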
The internet of things is growing rapidly, and IoT-enabled devices are beginning to appear in all aspects of our lives. This not only impacts consumers, but also enterprises, as it is expected that over 50% of all organizations will have some form of IoT in operation in 2019. The number of IoT-connected devices has risen exponentially, and that growth shows no sign of slowing as Gartner forecasts that more than 20 billion internet-connected appliances and machines will be in use by 2020 — a number that, even now, has surpassed the world’s population. With more and more companies developing internet-enabled devices ranging from doorbells and security cameras to refrigerators and thermostats, it comes as little surprise that threat actors are discovering new vulnerabilities and developing new ways to exploit them. An active defense is the use of offensive actions to outmaneuver an adversary and make an attack more difficult to carry out. Slowing down or derailing the attacker so they cannot advance or complete their attack increases the probability that the attacker will make a mistake and expose their presence or reveal their attack vector. Counterintelligence (CI) is the information gathered and actions taken to identify and protect against an adversary’s knowledge collection activities or attempts to cause harm through sabotage or other actions. The goal of CI is to ensure information cannot be modified or destroyed by a malicious actor and that only authorized people can access an organization’s information. CI is often associated with intelligence agencies, government organizations or the military, but businesses also benefit from including CI in their approach to security. In cybersecurity, counterintelligence is used to support the information security triad of confidentiality, integrity, and availability (CIA). Many organizations practice aspects of CI but refer to it by different names, including data loss prevention (DLP), malware reverse engineering and network forensics.
Artificial Intelligence is a rapidly progressing field with potential deployments in sectors such as financial services, trading, healthcare, translation, transportation, image recognition, and more. AI systems are often assumed to be high-performing and secure. To date, much of the discussion around machine learning and security has centered on identifying potential threats to computer systems and protecting against those vulnerabilities through automation. At the same time, there have been many concerns about AI being used for offensive purposes, for example by attackers who automatically adapt their malware. Many enterprise policymakers who work with artificial intelligence are thinking about the impact of AI on security administration. A 2020 report by the UK Artificial Intelligence Commissionerate recommended incorporating AI into cyber defense for proactive detection and mitigation of threats, an approach that requires a far greater speed of response than human decision-making allows. There are many aspects that AI can take care of if it is implemented correctly. Exploring this further, there is a distinct set of issues concerning how AI systems themselves can operate securely, not just how AI can be used to strengthen the security of data and computer networks. If we rely on machine learning algorithms to detect and respond to cyber threats, those algorithms need to be protected from compromise or misuse. As more crucial services and functions come to depend on AI, attackers will have a greater incentive to target the underlying algorithms. Implementing AI solutions can also help respond to rapidly evolving threats, which further underlines the need to secure the enterprise systems that run them. So you need to be aware of what you are using and how it will benefit your operations. AI has become an important and widely used technology in many industries, so security policymakers may find it necessary to consider the intersection of AI with cybersecurity. This article discusses some of the challenges in this area, including compromised decision-making and AI systems being used for malicious purposes. Securing the Artificial Intelligence decision-making process One of the biggest security threats to an AI system is the potential for an adversary to compromise the integrity of its decision-making process, so that decisions are no longer made the way the operator intends. One way to achieve this is to take control of the system directly, so that the attacker decides what output the system generates and what decisions it comes up with. Alternatively, an attacker may try to influence those decisions indirectly by feeding malicious inputs or adversarial training data to an AI model. As a real-world example, consider adversarial manipulation of an autonomous vehicle so that it gets into an accident. An attacker can exploit vulnerabilities in the car's software to influence its self-driving decisions externally. By exploiting the software and accessing it remotely, an attacker could make the car ignore a stop sign: the computer vision algorithms would no longer recognize the stop sign for what it is.
The process by which an adversary manipulates inputs to make a system produce mistakes is called "adversarial machine learning." Research has found that small changes to digital images, undetectable to the human eye, can be sufficient to cause an AI algorithm to misclassify the images completely. Another way to manipulate the inputs is "data poisoning," which occurs when adversaries manage to train AI models on mislabeled or inaccurate data. Pictures of stop signs can be labeled as something different, so the algorithm does not recognize stop signs when it encounters them in the real world. Data poisoning leads to algorithms making mistakes and misclassifying inputs. Even selectively poisoning a small subset of the training data, while the rest is labeled accurately, may be sufficient to compromise a model into making inaccurate or unexpected decisions. These kinds of adversarial techniques reaffirm the need to carefully control both the training data sets and the inputs the deployed model receives in order to secure machine learning decision-making processes. Neither is straightforward. The inputs to a machine learning system, in particular, may lie far beyond the control of the AI developers. Developers typically have much greater control over the training data sets for their models, but in many cases those data sets contain personal data or sensitive information, which raises another concern: how that information can be protected. These concerns create trade-offs for developers about how training is done and how much access to the data they retain. Research on adversarial machine learning suggests that making AI models more robust to data poisoning and adversarial inputs may require them to reveal more information about the individual data points used to train them. When sensitive data is used to train such models, this creates a new set of security risks, namely that adversaries can infer that data. Trying to secure AI models from these inference attacks may leave the training data sets more susceptible to adversarial machine learning techniques, and vice versa. This means that part of maintaining security for artificial intelligence is navigating the trade-offs between these two related sets of risks. Once you understand these underlying risks at the intersection of AI and cybersecurity, you need to consider carefully how they affect your specific use case and take appropriate measures to address them.
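To make the "small changes to inputs" point concrete, here is a toy, self-contained illustration of the fast-gradient-sign idea against a synthetic logistic-regression classifier. The model, its weights, and the perturbation size are all made up for the example (and the perturbation is exaggerated so the effect is visible); real attacks target real trained models.

```python
# Toy illustration of an adversarial input (FGSM-style) against a fixed
# logistic-regression "detector". Everything here is synthetic; it only
# demonstrates the mechanism, not an attack on any real system.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # fixed model weights (pretend: "stop sign" detector)
b = 0.0
x = 0.05 * w                      # a clean input the model classifies confidently as 1
y = 1.0                           # true label: "stop sign"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(v):
    return sigmoid(w @ v + b)

# Gradient of the logistic loss with respect to the *input* (not the weights):
# for L = -[y*log(p) + (1-y)*log(1-p)] with p = sigmoid(w.x + b), dL/dx = (p - y) * w
grad_x = (predict(x) - y) * w

# Fast-gradient-sign step: bounded per-feature perturbation in the loss-increasing direction.
eps = 0.25                        # deliberately large so the flip is obvious on this toy model
x_adv = x + eps * np.sign(grad_x)

print(f"clean input score:       {predict(x):.4f}")    # close to 1 (correct class)
print(f"adversarial input score: {predict(x_adv):.4f}") # driven towards 0 (misclassified)
```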
As an IT and security auditor, I have seen the importance of DHCP logging in ensuring network security and troubleshooting network issues. Here are the best practices for DHCP logging that every organization should follow:
1. Enable DHCP Logging: DHCP logging should be turned on to record every event that occurs in the DHCP server. The logs should include information such as the time of the event, the IP address assigned, and the client’s MAC address.
2. Store DHCP Logs Securely: DHCP logs are sensitive information that should be stored in a secure location. Access to the logs should be restricted to authorized personnel only.
3. Use a Centralized Logging Solution: To manage DHCP logs, organizations should use a centralized logging solution that can handle logs from multiple DHCP servers. This makes monitoring logs, analyzing data, and detecting potential security threats easier.
4. Regularly Review DHCP Logs: Regularly reviewing DHCP logs can help detect and prevent unauthorized activities on the network. IT and security auditors should review logs to identify suspicious behavior, such as unauthorized IP and MAC addresses.
5. Analyze DHCP Logs for Network Performance Issues: DHCP logs can also help identify network performance issues. By reviewing logs, IT teams can identify IP address conflicts, subnet mask issues, and other network performance problems.
6. Monitor DHCP Lease Expiration: Monitoring DHCP lease expiration is vital to ensure IP addresses are not allotted to unauthorized devices. DHCP logs can help to monitor lease expiration and to deactivate the leases of non-authorized devices.
7. Implement Alerting: IT and security audit teams should implement alerting options to ensure network security. By setting up alert mechanisms, they can be notified of suspicious activities such as unauthorized devices connecting to the network or DHCP problems.
8. Maintain a DHCP Log Retention Policy: An effective DHCP log retention policy should be defined to ensure logs are kept for an appropriate period. This policy helps to provide historical audit trails and to comply with data protection laws.
Following these DHCP logging best practices will help ensure the network’s security and stability while simplifying the troubleshooting of any network issues.
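As a small, hedged illustration of practices 4 and 6, the sketch below parses lease events out of a DHCP log and flags MAC addresses that are not on an approved list. The log line format, the file name, and the approved-MAC list are assumptions; adapt the regular expression to whatever your DHCP server actually writes.

```python
# Minimal sketch: review DHCP lease (DHCPACK) events and flag unapproved devices.
# The log format below is hypothetical; adjust the regex for your DHCP server.
import re

LEASE_LINE = re.compile(
    r"(?P<time>\S+ \S+)\s+DHCPACK\s+on\s+(?P<ip>\d+\.\d+\.\d+\.\d+)"
    r"\s+to\s+(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2})",
    re.IGNORECASE,
)

APPROVED_MACS = {"00:11:22:33:44:55", "66:77:88:99:aa:bb"}  # example values only

def review(log_path: str):
    """Return (time, ip, mac) tuples for leases handed to unapproved MAC addresses."""
    alerts = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LEASE_LINE.search(line)
            if not m:
                continue
            mac = m.group("mac").lower()
            if mac not in APPROVED_MACS:
                alerts.append((m.group("time"), m.group("ip"), mac))
    return alerts

if __name__ == "__main__":
    for when, ip, mac in review("dhcp.log"):
        print(f"ALERT {when}: lease {ip} handed to unapproved device {mac}")
```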
Wafer stands for Web Application Framework for Exploring, Exposing and Eliminating Risks. How it works This project develops and uses static analysis techniques to detect several types of security flaws in large Java Enterprise Edition (JEE) web applications. Common security flaws detected by Wafer, as listed in the OWASP Top 10, include SQL injection, cross-site scripting and path traversal. To improve its detection power, Wafer adds support for constructs that are common in web applications, but are typically hard to analyse statically. Our main challenges As with many similar projects, scalability for large codebases is a major challenge. The highly dynamic and decoupled nature of JEE web applications is another challenge for static analysis because it makes their runtime behavior unpredictable. To find out more, contact
Learn more about how dynamic approaches to security can help organizations better protect assets inside modern data centers and clouds. Security for the Modern Age– New Approaches to SDDC & Cloud Security Infrastructure trends such as software-defined everything, converged infrastructure and a hybrid cloud have dramatically increased the complexity and dynamics of the modern data center. And with it, traditional IT security practices and technologies have had a hard time keeping up. This modern SDDC security webinar will look at how dynamic approaches to security, coupled with software-defined infrastructure, can help organizations better protect assets inside modern data centers and clouds. Due to the complex and dynamic nature of modern data centers, along with very high traffic rates, organizations face significant challenges in gaining visibility into application communications and putting proper controls in place to secure east-west (server-to-server) traffic. This means that data center security architecture must be re-evaluated in order to meaningfully address security, shifting focus to application-layer visibility, granular micro-segmentation, real-time detection and automated response. - Application-layer visibility into all network and application flows that can be translated into easy-to-deploy micro-segmentation policies to better control east-west traffic. - Real-time breach detection that reduces the “noise” and false-positive rates of traditional detection technologies and enables security teams to quickly identify active, confirmed breaches. - High-interaction and dynamic threat deception technology that leverages the network infrastructure to automatically redirect intruders into an isolated environment for safe investigation. - Automated security incident analysis that compiles a full understanding of confirmed attacks and unburdens IT security teams from manual investigative tasks. - Rapid attack mitigation allowing for real-time attack isolation and remediation of infected files and servers.
Floodgate Firewall is a complete embedded firewall providing a critical layer of security for networked devices. Floodgate Modbus Packet Filtering extends the Floodgate Firewall, adding protection for devices using Modbus/TCP. Its unique design provides built-in filtering to protect the devices. This solution: - Blocks packets based on configurable rules - Controls who can send Modbus/TCP messages to the device, and what commands can be sent - Allows control and validation of individual fields within the message, and filtering of messages based on message type, content, and message source - Maintains interoperability with Modbus/TCP protocol standards Cyber Threats for Industrial Control Devices Internet-based attacks are on the rise and an increasing number of these attacks are targeting industrial devices. Modbus/TCP devices are notoriously easy targets as the protocol has no encryption, access control or other security features. Floodgate Modbus Protocol Filtering adds a layer of protection for Modbus/TCP devices to control who can communicate with the device, what communication is allowed, and to protect against malicious commands. Floodgate Modbus Filtering Provides: - Protection for Modbus/TCP systems with direct or indirect connection to the Internet - Protection from malware or attacks that originate within or outside the facility - Notification of malicious or suspicious Modbus/TCP traffic, allowing early detection of attacks - Easily configurable filtering rules - Active (block and report) or Passive (report only) modes - Filter packets based on source address, function code, and packet contents - Logging of blocked packets/policy violations - Small footprint and efficient design for embedded systems - Portable source code for use with any embedded RTOS and embedded Linux - Whitelist or blacklist filtering modes Configurable Filtering Policies Floodgate Modbus Filter uses configurable rules to control the filtering engine. The rules provide complete control over the type of filtering performed and the specific criteria used to filter packets. Rules can be configured for: - IP address filtering, to allow or block all Modbus commands from the configured IP addresses - Modbus function code filtering, to allow or block all commands based upon the Modbus function code - IP address and Modbus function code, to control what Modbus commands are allowed from a specific IP address - Control blacklist and whitelist filtering modes - Enable DPI filtering rules to validate message contents - Enable active or passive modes Floodgate Modbus Filter is integrated with Floodgate Agent, allowing configuration to be performed remotely by the Floodgate Manager or other security management system. EDSA Compliance Support Floodgate Modbus filtering provides an important building block for achieving EDSA compliance for embedded devices. Floodgate Firewall provides support for the following capabilities mandated by EDSA-311: - Protocol fuzzing and replay attack protection - Denial of service protection - Notification of attacks - Audit support Logging and Alerting Floodgate Modbus Filter maintains a log of security events and policy violations. Changes to firewall policies are also recorded in the logs enabling support for command audit requirements. Event logs can be used for forensic investigation to determine the source of an attack. 
Management System Integration The Floodgate Modbus Filter is integrated with the Floodgate Agent, enabling remote management from the McAfee ePO, Icon Labs Floodgate Management system or other Security Information and Event Management (SIEM) systems. This integration provides: - Centralized management of security policies - Situational Awareness and device status monitoring - Event management and log file analysis Intrusion Detection and Prevention Hackers attempting to penetrate an embedded device using remote attacks will probe the device for open ports and weaknesses. Modbus/TCP protocol filtering limits the attack surface potential hackers can exploit. Logging packets that violate configured filtering rules enables detection of unusual traffic patterns, traffic from unknown IP address or other suspicious behavior. Most cyberattacks remain undetected until it is too late. Early detection is critical to contain attacks, block and prevent theft of confidential information, prevent disruption of services, and stop proliferation of the attack to other systems.
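The sketch below is not Floodgate code; it only illustrates, under stated assumptions about the Modbus/TCP frame layout (a 7-byte MBAP header followed by a PDU whose first byte is the function code), how a whitelist rule combining source IP address and function code can be evaluated. A real filter would also handle DPI rules, logging, and active/passive modes.

```python
# Illustrative whitelist filter for a Modbus/TCP request (not Floodgate's implementation).
# Frame layout assumed: MBAP header = transaction id (2), protocol id (2), length (2),
# unit id (1), then the PDU, whose first byte is the function code.
import struct

# Example rule: this SCADA master may only read registers (function codes 3 and 4).
WHITELIST = {
    "192.0.2.10": {3, 4},
}

def allow(source_ip: str, frame: bytes) -> bool:
    if len(frame) < 8:
        return False                               # too short to carry a function code
    tid, proto, length, unit = struct.unpack(">HHHB", frame[:7])
    if proto != 0:                                 # Modbus/TCP protocol id is always 0
        return False
    function_code = frame[7]
    allowed = WHITELIST.get(source_ip)
    return allowed is not None and function_code in allowed

# A Read Holding Registers request passes; a Write Single Register (0x06) is blocked,
# as is any request from an address that is not on the whitelist.
read_req  = struct.pack(">HHHBBHH", 1, 0, 6, 1, 0x03, 0x0000, 0x0002)
write_req = struct.pack(">HHHBBHH", 2, 0, 6, 1, 0x06, 0x0000, 0x00FF)
assert allow("192.0.2.10", read_req)
assert not allow("192.0.2.10", write_req)
assert not allow("203.0.113.5", read_req)
```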
To block specific URLs using the robots.txt file, follow these steps:
- Identify the URLs you want to block: Determine the specific URLs or directories that you want to block search engines from crawling. For example, you may want to block a page like "https://example.com/private-page" or an entire directory like "https://example.com/private-directory/".
- Create or edit your robots.txt file: Access your website's root directory and locate the robots.txt file. If you don't have one, create a new text file and name it "robots.txt". If you already have a robots.txt file, open it for editing.
- Specify the URLs to block: Inside the robots.txt file, add a "User-agent: *" line followed by one "Disallow:" line per path, for example "Disallow: /private-page" and "Disallow: /private-directory/". The "User-agent: *" line specifies that the rules apply to all search engines, and each "Disallow:" line indicates a URL or directory to be blocked.
- Save the robots.txt file: Save your changes to the robots.txt file and ensure it is placed in the root directory of your website.
- Test your robots.txt file: After implementing the changes, test your robots.txt file using an online robots.txt testing tool (or the short script below) to ensure that the blocked URLs are not crawlable.
Note: Keep in mind that while the robots.txt file can prevent search engines from crawling specific URLs, it does not provide security or prevent access by users who know the specific URL.
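If you prefer to test locally rather than with an online tool, Python's standard-library robots.txt parser can be pointed at the live file. The URLs below are the hypothetical ones used in the steps above.

```python
# Quick local check of published robots.txt rules using the standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()   # fetches and parses the live robots.txt

for url in ("https://example.com/private-page",
            "https://example.com/private-directory/secret.html",
            "https://example.com/"):
    verdict = "allowed" if rp.can_fetch("*", url) else "blocked"
    print(f"{url} -> {verdict}")
```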
Detection of Mobile Replica Nodes in Wireless Sensor Networks In Wireless Sensor Networks (WSNs), nodes are numerous and unattended, so an adversary can easily capture and compromise sensor nodes, extract their secret keys, and then create many replicas (duplicates) of them. Once the secret key is taken from a sensor node, the sensitive data held by the node is leaked, and the adversary can quickly degrade network communication. To counter this node compromise attack, the authors use the Sequential Probability Ratio Test (SPRT). Several compromised-node detection schemes in the literature work well in static sensor networks but do not work well in mobile sensor networks. Using the SPRT, the authors detect compromised replica nodes in mobile sensor networks.
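For readers unfamiliar with the SPRT, the sketch below shows the generic test, not the paper's exact formulation: it accumulates a log-likelihood ratio over Bernoulli observations (here, an assumed "measured speed exceeds the system maximum" indicator) until one of two thresholds derived from the target error rates is crossed.

```python
# Generic SPRT sketch on Bernoulli observations; the observation model and the
# parameters are assumptions for illustration, not the paper's exact scheme.
import math

def sprt(observations, p0=0.1, p1=0.6, alpha=0.05, beta=0.05):
    """Decide between H0 (benign) and H1 (replica) once enough evidence accumulates."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "replica suspected", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(observations)

print(sprt([1, 1, 0, 1]))          # repeated over-speed observations -> replica suspected
print(sprt([0, 0, 0, 0, 1, 0]))    # mostly normal observations -> benign
```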
A security operations center (SOC) is typically a consolidated unit that deals with security issues on both a technical and a business level. It includes the three building blocks mentioned above: processes, people, and technology for improving and managing the security posture of an organization. It may, however, include more elements than these three, depending on the nature of the business involved. This post briefly reviews what each such element does and what its main functions are. Processes. The primary goal of the security operations center is to discover and deal with the sources of threats and prevent their recurrence. By identifying, tracking, and correcting problems in the process environment, this component helps to ensure that threats do not succeed in their objectives. The different functions and responsibilities of the individual components listed below outline the basic process scope of this unit. They also illustrate how these components interact with each other to identify and assess threats and to apply solutions to them. People. There are two roles normally involved in the process: one responsible for uncovering vulnerabilities and one responsible for implementing solutions. The people inside the security operations center monitor vulnerabilities, resolve them, and alert management to them. The monitoring function is split into several different areas, such as endpoints, alerts, e-mail, reporting, integration, and integration testing. Technology. The technology section of a security operations center handles the detection, identification, and investigation of intrusions. Some of the technologies used here are intrusion detection systems (IDS), managed security services (MSS), and application security monitoring tools (ASM). Intrusion detection systems use both active and passive alarm notification capabilities to spot intrusions. Managed security services, on the other hand, allow security experts to build controlled networks that include both networked computers and servers. Application security management tools provide application security services to administrators. Security information and event management (SIEM) is the last component of a security operations center, and it consists of a collection of software applications and tools. These programs and tools allow administrators to capture, record, and analyze security information and events. This final component also allows administrators to determine the cause of a security risk and to respond appropriately. SIEM provides security information and event management by allowing an administrator to see all security threats and to determine the origin of each threat. Compliance. One of the main goals of a SOC is the establishment of a risk assessment, which evaluates the degree of risk an organization faces. It also includes developing a strategy to reduce that risk. All of these activities are carried out in line with the principles of ITIL. Security compliance is defined as a key responsibility of the SOC, and it is an essential activity that supports the work of the operations center.
Operational roles and responsibilities. A SOC is put in place by an organization's senior management, but there are a number of operational functions that need to be performed. These functions are divided between several teams. The first group of operators is responsible for coordinating with other teams, the next group is responsible for response, the third group is responsible for testing and integration, and the last group is responsible for maintenance. NOCs can carry out and support a number of tasks within an organization. These tasks include the following: Operational responsibilities are not the only tasks a SOC performs. It is also required to develop and maintain internal policies and procedures, train employees, and implement best practices. Since operational responsibilities are assumed by the majority of organizations today, it might seem that the SOC is the single biggest organizational structure in the business. However, there are many other components that contribute to the success or failure of any organization. Since many of these other elements are often described as "best practices", this term has become a common description of what a SOC actually does. Detailed reports are required to analyze risks against a specific application or segment. These reports are usually sent to a central system that keeps track of the risks against the systems and alerts management teams. Alerts are typically received by operators via email or text message. Most businesses choose email notification to allow quick and easy response to these kinds of incidents. Other types of activities carried out by a security operations center include performing risk assessments, finding threats to the infrastructure, and stopping attacks. The risk assessment requires knowing what threats the business faces every day, such as which applications are vulnerable to attack, where, and when. Operators can use threat analyses to identify weaknesses in the security measures that companies use. These weaknesses may include a lack of firewalls, missing application security, weak password practices, or weak reporting procedures. Similarly, network monitoring is another service offered by an operations center. Network monitoring sends alerts directly to the monitoring team to help resolve a network problem. It allows monitoring of critical applications to make sure that the organization can continue to run successfully. Network performance monitoring is used to evaluate and improve the organization's overall network performance. A security operations center can spot breaches and stop attacks with the help of alerting systems. This type of technology helps to determine the source of an intrusion and block attackers before they can reach the information or data they are trying to obtain. It is also useful for identifying which IP address to block in the network, or which user is causing the denial of access. Network monitoring can identify malicious network activity and stop it before any damage occurs to the network. Companies that rely on their IT infrastructure depend on its ability to operate efficiently and to maintain a high level of confidentiality and performance.
This article will help you to remove the Scarab ransomware in full. Follow the ransomware removal instructions provided at the end of the article. Scarab is a virus that encrypts your files and demands money as a ransom to get your files restored. According to some malware researchers, all files on a compromised computer get locked with the AES military-grade encryption algorithm. The Scarab cryptovirus will encrypt your data, while also appending the custom .[[email protected]].lock extension to each of the encrypted files. Keep on reading the article to see how you could try to potentially recover some of your files.

| Name | .[[email protected]].lock Files Virus |
| Short Description | The ransomware encrypts files on your computer system and demands a ransom to be paid to allegedly recover them. |
| Symptoms | The ransomware will encrypt your files with the AES encryption algorithm. All locked files will have the .[[email protected]].lock extension appended to them. |
| Distribution Method | Spam Emails, Email Attachments |
| Detection Tool | See If Your System Has Been Affected by .[[email protected]].lock Files Virus |

.[[email protected]].lock Files Virus (Scarab) – Distribution Scarab ransomware might spread its infection in various ways. A payload dropper which initiates the malicious script for this ransomware is being spread around the World Wide Web, and researchers have gotten their hands on a malware sample. If that file lands ...
In many organisations an automated scan of an application is done before it’s allowed to “go live”, especially if the app is external facing. There are typically two types of scan:
- Static Scan
- Dynamic Scan
A static scan is commonly a source code scan. It will analyse code for many common failure modes. If you’re writing C code then it’ll flag on common buffer overflow patterns. If you’re writing Java with database connectors it’ll flag on common password exposure patterns. This is a first line of defense. If you can do this quickly enough then it might even be part of your code commit path; e.g. a git merge must pass static scanning before it can be accepted, and the scan can be triggered automatically. Static scans aren’t perfect, of course. Nothing is! They can miss some cases, and can false-positive on other cases (requiring work-arounds). So a static scan isn’t a replacement for a code review (which can also look at the code logic to see if it does the right thing), but is complementary. Static analysis was one reason for a massive increase in security of opensource projects by pro-actively flagging potential risks. A dynamic scan actually tests the component while it’s running. For a web site (the most common form of publically exposed application) it can automatically discover execution paths and fuzz input fields and try to present bad data. Dynamic scanning can only test the code paths it knows about (or discovers). Stuff hidden behind authentication must be special cased. And it could potentially cause bad behaviour to happen; it could trigger a little Bobby Tables event, so be very careful about scanning production systems! A common practice is to combine both types of scan. A static scan is typically quick and can be done frequently. A dynamic scan may take hours, and so may not be done every time. So a common workflow may be:
- Developer codes, tests stuff out in a local instance
- Developer commits (potential static scan here)
- “Pull request”
- Code review
- Code merge (static scan here)
So far work has been done purely in development; no production code has been pushed. The developer is allowed to develop and test their code with minimal interruption. It’s only when code is “ready” that scanning occurs. Now depending on your environment and production controls things may get more complicated. Let’s assume a “merge” then triggers a “QA” or “UAT” build:
- Merge fires off “Jenkins” process
- Jenkins stands up a test environment
- Dynamic scan occurs
- Merge successful only if the scan is successful
Depending on the size and complexity of the app this scan could take hours. One advantage to micro-services is that each app has a small footprint and so the scan should be a lot quicker (a minimal sketch of such a merge gate appears at the end of this post). It’s important for a developer to not try and hide code paths. “Oh this scanning tool stops me from working, so I’ll hide stuff so it never sees it”… We all have the desire to do things quicker and know we’re smarter than “the system”, but going down this path will lead to bugs. Side bar: There’s the standard open-source comment; “with sufficient eyes, all bugs are shallow”. There’s a corollary to this; “with sufficient people using your program all bugs will be exploited”. Scanning this site and false positives Now this site is pretty simple. I use Hugo to generate static web pages. That means there’s no CGI to be exploited; no database connections to be abused. It’s plain and simple. But does that mean a scan of the site would be clean? Is the Apache server configured correctly?
Have I added some bad CGI to the static area of the site? Have I made another mistake somewhere? The Kali Linux distribution includes a tool called Nikto (no bonus points for knowing where the name comes from). This is a web scanner that can check for a few thousand known issues on a web server. It’s not a full host scan (it doesn’t check for other open ports, for example). WARNING: Nikto is not a stealth tool; if you use this against someone’s site then they WILL know. Given how my site is built, I wouldn’t expect it to find any issues.

+ Server: Apache
+ Server leaks inodes via ETags, header found with file /, fields: 0x44af 0x533833f94c500
+ Multiple index files found: /index.xml, /index.html
+ The Content-Encoding header is set to "deflate" this may mean that the server is vulnerable to the BREACH attack.
+ Allowed HTTP Methods: POST, OPTIONS, GET, HEAD
+ OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content.
+ OSVDB-3268: /icons/: Directory indexing found.
+ OSVDB-3233: /icons/README: Apache default file found.
+ 8328 requests: 0 error(s) and 7 item(s) reported on remote host

Interesting, it found 7 items to flag on. Except if we look closer, these are not necessarily issues. Is the “inode number” something we need to care about? I don’t think so. The sitemap.xml file is deliberately there to allow search engines to find stuff. The /icons directory is the Apache standard… is there a risk to having it visible? So even a “clean” site may have issues according to the scanning tool, even though it really doesn’t. (At least I assume it doesn’t! My evaluation may be wrong…) It takes a human to filter out these false positives. If you’re going to put this as part of your CI/CD pipeline then make sure you’ve filtered out the noise first! Static and Dynamic scanning tools are complementary. They both have strengths and weaknesses. Both should be used. Even if your code is never exposed to the internet it’s worth doing it, to help protect from the insider threat and also to mitigate against an attacker with a foothold from being able to exploit your server. They do not replace humans (e.g. code review processes) but can help protect against common failure modes. 2016/10/13 The ever interesting Wolf Goerlich, as part of his “Stuck in traffic” vlog, also points out another limitation of scanning tools; they don’t handle malicious insiders writing bad code (and, by extension, an intruder able to get into the source code repo and make updates). You still need humans in the loop (e.g. code review processes), whether it’s to test for code logic or malicious activities. You should watch Wolf’s vlog.
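Tying this back to the merge-gate workflow described earlier: a minimal sketch of a "scan must pass before merge" step might look like the following. Bandit is used purely as an example static analyser for Python code; the scanned path, the severity threshold and the way the JSON report is consumed are illustrative rather than prescriptive.

```python
# Minimal sketch of a merge gate: run a static analyser and fail the pipeline
# step if it reports medium or high severity findings. Bandit is just an example
# scanner; any tool with machine-readable output would slot in the same way.
import json
import subprocess
import sys

def run_static_scan(path: str = "src") -> int:
    """Return the number of medium/high findings; non-zero should block the merge."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    findings = [
        issue for issue in report.get("results", [])
        if issue.get("issue_severity") in ("MEDIUM", "HIGH")
    ]
    for issue in findings:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    return len(findings)

if __name__ == "__main__":
    # The exit code is what the CI system (Jenkins or otherwise) uses to gate the merge.
    sys.exit(1 if run_static_scan() else 0)
```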
Networking applications with high memory access overhead gradually exploit network processors that feature multiple hardware multithreaded processor cores along with a versatile memory hierarchy. Given rich hardware resources, however, the performance depends on whether those resources are properly allocated. In this work, we develop an NIPS (Network Intrusion Prevention System) edge gateway over the Intel IXP2400 by characterizing/mapping the processing stages onto hardware components. The impact and strategy of resource allocation are also investigated through internal and external benchmarks. Important conclusions include: (1) the system throughput is influenced mostly by the total number of threads, namely I × J, where I and J represent the numbers of processors and threads per processor, respectively, as long as the processors are not fully utilized, (2) given an application, algorithm and hardware specification, an appropriate (I, J) for packet inspection can be derived and (3) the effectiveness of multiple memory banks for tackling the SRAM bottleneck is affected considerably by the algorithms adopted. - Network intrusion and detection system - Network processor - Resource allocation
Snort provides you with a high-performance, yet lightweight and flexible rule-based network intrusion detection and prevention system that can also be used as a packet sniffer and logger. With its advanced capabilities and reliability, it is the most widely deployed IDS/IPS software, widely used in network monitoring applications. Combining database signatures with anomaly-based scanning, Snort is capable of detecting unwanted intrusions and features real-time analysis and alerts. In order to work properly, the application requires libpcap, a library that provides direct packet access, allowing it to read raw network data. Having a Snort sensor up and running requires solid command-line, network protocol, and IDS knowledge, so beginners might need to take their time to go through the documentation in order to learn how things work. The application can be used as a packet sniffer and logger, monitoring the network traffic in real time, displaying the TCP/IP packet headers and recording the packets to a logging directory or a database (Microsoft SQL Server and ODBC are among those supported). However, the real power of Snort resides in its intrusion detection capabilities, since it can analyze network traffic and warn you about unusual events, vulnerabilities or exploits. The user-customizable rules are similar to those of a firewall application and define the behavior of Snort in IDS mode. You can set them up by editing the configuration file, which can also include application-specific rules (for SMTP e-mail connections, SSH and so on). The program analyzes the sent and received packets and determines whether any of them represent a possible threat. The packets that trigger rules can be logged in ASCII or binary format, the latter being recommended for keeping up with a fast LAN. Snort benefits from large community support with significant contributions to the rule database, which guarantees its reliability. Whether you use it for real-time traffic analysis and logging or as an IDS/IPS appliance, it is a powerful network security tool that professional users are sure to appreciate.
If you can locate README.txt on your Desktop and, on top of that, almost all your personal files have been locked, Thanatos Ransomware, a newly-detected ransomware infection, must have infiltrated your computer. This infection always tries to slither onto computers unnoticed, but the majority of users find out about its successful entrance soon because they notice that they can no longer access the files they need. Researchers working at 411-spyware.com say that this ransomware infection locks documents, pictures, music, and all other files the majority of users consider the most valuable. Free decryption software was not available at the time of writing. In addition, it is never a good idea to purchase decryption software from cyber criminals. Therefore, we cannot promise that you could unlock those encrypted files. In any event, the ransomware infection needs to be fully removed from the system. As has been observed, it deletes its executable file after encrypting data on victims’ computers, so the only component you will need to erase to delete this infection fully is its ransom note. We do not consider Thanatos Ransomware sophisticated malware because its working scheme is quite simple. Once it infiltrates a computer, it scans the system to find out where the user's personal files are located and then encrypts all these files mercilessly. You can tell which of them have been locked by looking at your data: encrypted files will have the .THANATOS extension placed next to their original extensions, for example, file.exe.THANATOS. The ransom note README.txt tells users that they will lose all encrypted data if they do not pay 0.01 BTC to the provided BTC address. It should be noted that this ransom note will be opened automatically on system startup if you do not remove the infection fully, because it creates a Value in the Run (HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run) registry key that allows it to open together with the Windows OS. Do not send your money to malicious software developers even if you can afford it, because you might not get the decryption code from them. Users are told that they “will receive the decryption code from this mail [email protected],” but we cannot guarantee that you will get it. There are many users who do not get the promised decryption tools from crooks, so our advice for all computer users is not to transfer a cent to cyber criminals no matter what kind of malicious application they encounter. Thanatos Ransomware is not one of the prevalent ransomware infections, so researchers still do not know much about the distribution of this malicious application. According to them, this threat should be spread via spam emails, but that must be only one of several distribution methods used to spread it. Malicious files launching the ransomware are disguised as harmless-looking documents. Because of this, users open them and become the ones responsible for allowing malware to enter their computers. Users should carefully inspect new software before installing it on their systems as well, because they might download malware from the web by mistake. Unfortunately, we cannot promise that this will be enough to prevent all harmful infections from entering the system because some threats are sneakier than others and, because of this, we recommend taking more serious security measures if you want to live without malware.
The installation of security software should be enough to keep harmful threats away, so install it right after you erase Thanatos Ransomware. According to researchers, since Thanatos Ransomware does not have many components and deletes its executable file after it performs its main activity, i.e. encrypting files on the victim's computer, it should not be very hard to erase this infection. Of course, less experienced users may struggle to get rid of it manually without any guidance, so if you consider yourself one of them, you should scroll down and use the instructions you find there. You only need to delete the ransom note from your Desktop and eliminate the Value associated with it from the system registry to make sure it cannot open automatically. Alternatively, this nasty ransomware infection can be removed from the system with an antimalware scanner, but we want to emphasize that neither approach will unlock the encrypted files.
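For readers comfortable with the command line, the registry clean-up described above can be sketched with the built-in reg tool. The value name below is a placeholder, not the real name used by the infection; inspect the Run key first and delete only entries you have confirmed belong to the ransom note:

  REM List the autorun entries for the current user and look for anything unfamiliar
  reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"

  REM Remove the entry you identified (SuspiciousValueName is a placeholder)
  reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v SuspiciousValueName /f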
ABSTRACT (doi:10.11610/isij.4715): Nowadays hackers are able to find many software vulnerabilities, which can be exploited for malicious purposes such as destroying the operating system, stealing users' private data, or demanding a ransom for the data to remain intact and valid. The majority of attacks use an Internet connection; therefore, efforts should be directed at the way in which data packets are transmitted. The hardware-software complex, which is the main subject of the presented research, ...
Dridex is malware that uses Microsoft Word macros to infect a system and then joins the infected machine to a botnet in order to steal banking credentials and other sensitive personal information, giving the attackers access to the victims' financial records. Dridex first appeared in 2014 and has since infected millions of computers. In 2015, financial theft caused by Dridex was estimated at around 20 million pounds in the UK and around 20 million dollars in the US.

Dridex malware and its original version Cridex

The original version of Dridex was known as Cridex, and it first appeared in 2012. Cridex would act as a worm and self-replicate to infect other computers in the network using network drives or attached local storage devices. After infection, it would add the infected computer to a botnet and harvest sensitive banking credentials of the victims. The current version, Dridex, first appeared in 2014. Like Cridex, Dridex also adds the infected computer to a botnet and steals sensitive credentials of the victims. But, unlike Cridex, Dridex does not self-replicate. It typically uses spam emails to infect a computer. The victim typically receives a spam email with a Microsoft Word document attached; when the attachment is opened, its macros download and install the malware on the victim's computer. Dridex updated itself significantly in November 2014: it started using peer-to-peer communication and decentralized its infrastructure, making it much harder to take down.

How does Dridex malware infect a computer?

Dridex is spread through spam campaigns. Victims typically get spam emails with a Microsoft Word attachment. To make the spam emails look more authentic, the attackers often use real company names in the message body, subject line or sender address. They may even use the same top-level domain name as that of the actual company. In most cases, these spam emails are disguised as some sort of financial statement. The attached Microsoft Word document contains a malicious macro. When a victim opens the attachment, the macro starts execution. It drops a .vbs file, which in turn downloads and installs Dridex on the victim's computer. To summarize, Dridex typically follows the steps mentioned below to infect a computer:

- The user receives a spam email with a Microsoft Word attachment, mostly disguised as a financial statement.
- The user clicks on the attachment and it prompts them to enable macros.
- On enabling them, the macro starts execution and a malicious .vbs file is dropped.
- The .vbs file downloads and installs the Dridex malware.

How does Dridex malware steal sensitive data of victims?

After infection, Dridex injects itself into popular web browsers and uses a Man-in-the-Browser attack to steal sensitive credentials of the victims. It typically follows the steps mentioned below for the purpose:

- After infecting a computer, the malware installs a malicious extension in the victim's browser. When the user restarts the browser, it gets loaded automatically.
- The extension registers a handler for every page load, which tracks all the pages loaded by the browser and matches them against a list of known websites.
- Whenever the user loads a page of a banking website, the extension registers a button event handler.
- The user authenticates to the banking website with his credentials. When the user fills up a form for a financial transaction, the extension intercepts the communication.
It notes down the data entered by the user, but modifies the data and sends the modified data to the banking web application.
- The web application performs the transaction as per the modified data and sends the receipt.
- The extension again intercepts the communication. It replaces the data in the receipt with the data originally entered by the user.
- The user gets a modified receipt filled with the data he provided.
- The stolen data is transferred back to the attackers' C&C server.

Who are the targeted victims of Dridex malware?

Dridex typically attacks customers of selected banks and financial institutions. The main purpose of the attackers is to infect the computers of those customers with the malware and then to modify or monitor financial transactions in order to steal sensitive credentials.

How to prevent Dridex malware?

Dridex is one of the most widely known and notorious pieces of malware, and it is difficult to detect. However, a user can always follow some simple steps to prevent infection:

- The malware typically uses spam emails to infect a computer. Those spam emails are often carelessly composed and contain contradictory information. A careful inspection of the email can go a long way towards preventing infection.
- The malware exploits security vulnerabilities of commonly used software to infect a computer. So, always keep your computer updated with the latest security patches for all commonly used software.
- Update your operating system with recent patches for the same reason.
- Keep your browser updated with recent patches. This reduces the security vulnerabilities present in the browser software.
- Always keep your anti-malware software and its definitions updated from a trusted source.
- Closely monitoring any changes in browser settings is another way of preventing this attack. Browser extensions and scripting should be limited, and do not use any browser extension if you are not sure about its authenticity.
- Users should educate themselves about Dridex malware and its attacks and use common sense while using sensitive banking web applications.
- Users should change the credentials of the banking application immediately on suspected infection.

So, be aware of the various malware programs and how to prevent them, so that you can protect your data in a better way. And stay safe, stay protected.
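Since the infection chain above hinges on Word macros being allowed to run, one practical hardening step is to check the Office macro security setting. The sketch below is illustrative only: the registry path assumes Office 2016/365 (version key 16.0), and the VBAWarnings value meanings should be verified against Microsoft's documentation for your Office version before relying on them.

  REM Show the current Word macro setting for the signed-in user
  REM (assumed meanings: 2 = disable with notification, 3 = only digitally signed, 4 = disable all)
  reg query "HKCU\Software\Microsoft\Office\16.0\Word\Security" /v VBAWarnings

  REM Tighten the setting so macros are disabled without notification
  reg add "HKCU\Software\Microsoft\Office\16.0\Word\Security" /v VBAWarnings /t REG_DWORD /d 4 /f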
Network Working Group
Request for Comments: 2123
Category: Informational
N. Brownlee, The University of Auckland
March 1997

Traffic Flow Measurement: Experiences with NeTraMet

Status of this Memo

This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Abstract

This memo records experiences in implementing and using the Traffic Flow Measurement Architecture and Meter MIB. It discusses the implementation of NeTraMet (a traffic meter) and NeMaC (a combined manager and meter reader), considers the writing of meter rule sets and gives some guidance on setting up a traffic flow measurement system using NeTraMet.

Table of Contents

1 Introduction
  1.1 NeTraMet structure and development
  1.2 Scope of this document
2 Implementation
  2.1 Choice of meter platform
  2.2 Programming support requirements
    2.2.1 DOS environment
    2.2.2 Unix environment
  2.3 Implementing the meter
    2.3.1 Data structures
    2.3.2 Packet matching
    2.3.3 Testing groups of rule addresses
    2.3.4 Compression of address masks
    2.3.5 Ignoring unwanted flow data
    2.3.6 Observing meter reader activity
    2.3.7 Meter memory management
  2.4 Data collection
  2.5 Restarting a meter
  2.6 Performance
3 Writing rule sets
  3.1 Rule set to observe all flows
  3.2 Specifying flow direction, using computed attributes
  3.3 Subroutines
  3.4 More complicated rule sets
4 Flow data files
  4.1 Sample flow data file
  4.2 Flow data file features
  4.3 Terminating and restarting meter reading
5 Analysis applications
6 Using NeTraMet in a measurement system
  6.1 Examples of NeTraMet in production use
7 Acknowledgments
8 References
9 Security Considerations
10 Author's Address

1 Introduction

Early in 1992 my University needed to develop a system for recovering the costs of its Internet traffic. In March of that year I attended the Internet Accounting Working Group's session at the San Diego IETF, where I was delighted to find that the Group had produced a detailed architecture for measuring network traffic and were waiting for someone to try implementing it. During 1992 I produced a prototype measurement system, using balanced binary trees to store information about traffic flows. This work was reported at the Washington IETF in November 1992. The prototype performed well, but it made no attempt to recover memory from old flows, and the overheads in managing the balanced trees proved to be unacceptably high. I moved on to develop a production-quality system, this time using hash tables to index the flow information.
This version was called NeTraMet (the Network Traffic Meter), and was released as free software in October 1993. Since then I have continued working on NeTraMet, producing new releases two or three times each year. NeTraMet is now in production use at many sites around the world. It is difficult to estimate the number of sites, but there is an active NeTraMet mailing list, which had about 130 subscribers in March 1996. Early in 1996 the Realtime Traffic Flow Measurement Working Group (RTFM) was chartered to move the Traffic Flow Measurement Architecture on to the IETF standards track. This document records traffic flow measurement experience gained through three years of working with NeTraMet.

1.1 NeTraMet structure and development

The Traffic Flow Architecture document describes four components:

- METERS, which are attached to the network at the points where it is desired to measure the traffic,
- METER READERS, which read data from meters and store it for later use,
- MANAGERS, which configure meters and control meter readers, and
- ANALYSIS APPLICATIONS, which process the data from meter readers so as to produce whatever reports are required.

NeTraMet is a computer program which implements the Traffic Meter, stores the measured flow data in memory, and provides an SNMP agent so as to make it available to Meter Readers. The NeTraMet distribution files include NeMaC, which is a combined Manager and Meter Reader capable of managing an arbitrary number of meters, each of which may be using its own rule set, and having its flow data collected at its own specified intervals. The NeTraMet distribution also includes several rudimentary Analysis Applications, allowing users to produce simple plots from NeMaC's flow data files (fd_filter and fd_extract) and to monitor - in real time - the flows at a remote meter (nm_rc and nifty).

Since the first release the Traffic Meter MIB has been both improved and simplified. Significant changes have included better ways to specify traffic flows (i.e. more actions and better control structures for the Packet Matching Engine), and computed attributes (class and kind). These changes have been prompted by operational requirements at sites using NeTraMet, and have been tested extensively in successive versions of NeTraMet.

NeTraMet is widely used to collect usage data for Internet Service Providers. This is especially so in Australia and New Zealand, but there are also active users at sites around the world, for example in Canada, France, Germany and Poland. NeTraMet is very useful as a tool for understanding exactly where traffic is flowing in large networks. Since the Traffic Meters perform considerable data reduction (as specified by their rule sets) they significantly reduce the volume of data to be read by Meter Readers. This characteristic makes NeTraMet particularly effective for networks with many remote sites. An example of this (the Kawaihiko network) is briefly described below. As well as providing data for post-observation analysis, NeTraMet can be used for real-time network monitoring and trouble-shooting. The NeTraMet distribution includes 'nifty,' an X/Motif application which monitors traffic flows and attempts to highlight those which are 'interesting.'

1.2 Scope of this document

This document presents the experience gained from three years' work with the Traffic Flow Measurement Architecture.
Its contents are grouped as follows:

- Implementation issues for NeTraMet and NeMaC,
- How rule files work, and how to write them for particular purposes, and
- How to use NeTraMet and NeMaC for short-term and long-term flow measurement.

2 Implementation

2.1 Choice of meter platform

As pointed out in the Architecture document, the goal of the Realtime Traffic Flow Measurement Working Group is to develop a standard for the Traffic Meter, with the goal of seeing it implemented in network devices such as hubs, switches and routers. Until the Architecture is well enough developed to allow this, it has sufficed to implement the meter as a program running on a general-purpose computer system.

The choice of computer system for NeTraMet was driven by the need to choose one which would be widely available within the Internet community. One strong possibility was a Unix system, since these are commonly used for a variety of network support and management tasks. For the initial implementation, however, Unix would have had some disadvantages:

- The wide variety of different Unix systems can increase the difficulties of software support.
- The cost of a Unix system as a meter is too high to allow users to run meters simultaneously at many points within their networks.

Another factor in choosing the platform was system performance. When I first started implementing NeTraMet it was impossible to predict how much processing workload was needed for a viable meter. Similarly, I had no idea how much memory would be required for code or data. I therefore chose to implement NeTraMet on a DOS PC. This was because:

- It is a minimum system in all respects. If the meter works well on such a system, it can be implemented on almost any hardware (including routers, switches, etc.)
- It is an inexpensive system. Sites can easily afford to have many meters around their networks.
- It is a simple system, and one which I had complete control over. This allowed me to implement effective instrumentation to monitor the meter's performance, and to include a wide variety of performance optimisations in the code.

Once the meter was running I needed a manager to download rule files to it. Since a single manager and meter reader can effectively support a large number of meters, a Unix environment for NeMaC was a natural choice. There are fewer software support problems for NeMaC than for NeTraMet since NeMaC has minimal support needs - it only needs to open a UDP socket to the SNMP port on each controlled meter.

Early NeTraMet distributions contained only the PC meter and Unix manager. In later releases I ported NeTraMet (the meter) to Unix, and extended the control features of NeMaC (the combined manager and meter reader). I have also experimented with porting NeMaC to the DOS system. This is not difficult, but doesn't seem to be worth pursuing. The current version of NeTraMet is a production-quality traffic measurement system which has been in continuous use at the University of Auckland for nearly two years.

2.2 Programming support requirements

To implement the Traffic Flow Meter I needed a programming environment providing good support for the following:

- observation of packet headers on the network;
- system timer with better than 10 ms resolution;
- IP (Internet Protocol), for communications with manager and meter reader;
- SNMP, for the agent implementing the Meter MIB.

2.2.1 DOS environment

For the PC I chose to use Ethernet as the physical network medium.
This is simply an initial choice, being the medium used within the University of Auckland's data network. Interfaces for other media could easily be added as they are needed. In the PC environment a variety of 'generalised' network interfaces are available. I considered those available from companies such as Novell, DEC and Microsoft and decided against them, partly because they are proprietary, and partly because they did not appear to be particularly easy to use. Instead I chose the CRYNWR Packet Drivers. These are available for a wide variety of interface cards and are simple and clearly documented. They support Ethernet's promiscuous mode, allowing one to observe headers for every passing packet in a straightforward manner. One disadvantage of the Packet Drivers is that it is harder to use them with newer user shells (such as Microsoft Windows), but this was irrelevant since I intended to run the meter as the only program on a dedicated machine.

Timing on the PC presented a challenge since the BIOS timer routines only provide a clock tick about 18 times each second, which limits the available time resolution. Initially I made do with a timing resolution of one second for packets, since I believed that most flows existed for many seconds. In recent years it has become apparent that many flows have lifetimes well under a second. To measure them properly with the Traffic Flow Meter one needs times resolved to 10 millisecond intervals, this being the size of TimeTicks, the most common time unit within SNMP. Since all the details of the original PC are readily available, it was not difficult to understand the underlying hardware. I have written PC timer routines for NeTraMet which read the hardware timer with 256 times the resolution of the DOS clock ticks, i.e. about 5 ticks per millisecond.

There are many TCP/IP implementations available for DOS, but most of them are commercial software. Instead I chose Waterloo TCP, since this was available (including full source code) as public domain software. This was necessary since I needed to modify it to allow me to save incoming packet headers at the same time as forwarding packets destined for the meter to the IP handler routines. For SNMP I chose CMU SNMP, since again this was available (with full source code) as public domain software. This made it fairly simple to port it from Unix to the PC. Finally, for the NeTraMet development I used Borland's Turbo C and Turbo Assembler. Although many newer C programming environments are now available, I have been perfectly happy with Turbo C version 2 for the NeTraMet project!

2.2.2 Unix environment

In implementing the Unix meter, the one obvious problem was 'how do I get access to packet headers?' Early versions of the Unix meter were implemented using various system-specific interfaces on a SunOS 4.2 system. Later versions use libpcap, which provides a portable method of obtaining access to packet headers on a wide range of Unix systems. I have verified that this works very well for Ethernet interfaces on Solaris, SunOS, Irix, DEC Unix and Linux, and for FDDI interfaces on Solaris. libpcap provides timestamps for each packet header with resolution determined by the system clock, which is certainly better than 10 ms! All Unix systems provide TCP/IP capabilities, so that was not an issue. For SNMP I used CMU SNMP, exactly as on the PC.

2.3 Implementing the meter

This section briefly discusses the data structures used by the meter, and the packet matching process.
One very strong concern during the evolution of NeTraMet has been the need for the highest possible level of meter performance. A variety of interesting optimisations have been developed to achieve this, as discussed below. Another particular concern was the need for efficient and effective memory management; this is discussed in detail below.

2.3.1 Data structures

All the programs in NeTraMet, NeMaC and their supporting utility programs are written in C, partly because C and its run-time libraries provide good access to the underlying hardware, and partly because I have found it to be a highly portable language. The data for each flow is stored in a C structure. The structure includes all the flow's attribute values (including packet and byte counts), together with a link field which can be used to link flows into lists. NeTraMet assumes that Adjacent addresses are 802 MAC addresses, which are all six bytes long. Similarly, Transport addresses are assumed to be two bytes long, which is the case for port numbers in IP. Peer addresses are normally four bytes or less in length. They may, however, be as long as 20 bytes (for CLNS). I have chosen to use a fixed Peer address size, defined at compile time, so as to avoid the complexity of having variable-sized flow structures.

The flow table itself is an array of pointers to flow data structures, which allows indexed access to flows via their flow numbers. There is also a single large hash table, referred to in the Architecture document as the flow table's 'search index'. Each hash value in the table points to a circular chain of flows. To find a flow one computes its hash value then searches that value's flow chain.

The meter stores each rule in a C structure. All the rule components have fixed sizes, but address fields must be wide enough to hold any type of address - Adjacent, Peer or Transport. The rule address width is defined at compile time, in the same way as flow Peer addresses. Each rule set is implemented as an array of pointers to rule data structures, and the rule table is an array of pointers to the rule sets. The size of each rule set is specified by NeMaC (before it begins downloading the rule set), but the maximum number of rule sets is defined at compile time.

2.3.2 Packet matching

Packet matching is carried out in NeTraMet exactly as described in the Architecture document. Each incoming packet header is analysed so as to determine its attribute values. These values are stored in a structure which is passed to the Packet Matching Engine. To facilitate matching with source and destination reversed this structure contains two substructures, one containing the source Adjacent, Peer and Transport address values, the other containing the destination address values.

2.3.3 Testing groups of rule addresses

As described in the Architecture, each rule's address will usually be tested, i.e. ANDed with the rule's mask and compared with the rule's value. If the comparison fails, the next rule in sequence is executed. This allows one to write rule sets which use a group of rules to test an incoming packet to see whether one of its addresses - e.g. its SourcePeerAddress - is one of a set of specified IP addresses. Such groups of related rules can grow quite large, containing hundreds of rules. It was clear that sequential execution of such groups of rules would be slow, and that something better was essential.
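As an illustration of the data structures described in section 2.3.1 above, the fragment below sketches what a flow record, the flow table and the hash-chain lookup might look like in C. It is not the NeTraMet source; the names, sizes and hash function are assumptions chosen only to make the layout concrete.

  #include <stdint.h>
  #include <string.h>

  #define PEER_ADDR_LEN   4        /* fixed Peer address size, set at compile time */
  #define ADJ_ADDR_LEN    6        /* 802 MAC addresses are six bytes long */
  #define MAX_FLOWS       4000
  #define HASH_SLOTS      8192

  typedef struct flow {
      uint32_t index;                       /* position in the flow table */
      uint8_t  src_peer[PEER_ADDR_LEN];     /* attribute values pushed during matching */
      uint8_t  dst_peer[PEER_ADDR_LEN];
      uint8_t  src_adj[ADJ_ADDR_LEN];
      uint8_t  dst_adj[ADJ_ADDR_LEN];
      uint16_t src_trans, dst_trans;
      uint32_t to_pdus, from_pdus;          /* packet counts */
      uint32_t to_octets, from_octets;      /* byte counts */
      uint32_t last_time;                   /* used later by the garbage collector */
      struct flow *chain;                   /* link in a circular hash chain */
  } flow_t;

  static flow_t *flow_table[MAX_FLOWS];     /* indexed access by flow number */
  static flow_t *search_index[HASH_SLOTS];  /* the flow table's 'search index' */

  /* Toy hash over the pushed address values; the real meter's function differs. */
  static unsigned flow_hash(const flow_t *key)
  {
      unsigned h = 0;
      size_t i;
      for (i = 0; i < PEER_ADDR_LEN; i++)
          h = h * 31 + key->src_peer[i] + key->dst_peer[i];
      return (h + key->src_trans + key->dst_trans) % HASH_SLOTS;
  }

  /* Search the circular chain for a flow with the same pushed values. */
  static flow_t *find_flow(const flow_t *key)
  {
      flow_t *head = search_index[flow_hash(key)], *f = head;
      if (head == NULL)
          return NULL;
      do {
          if (memcmp(f->src_peer, key->src_peer, PEER_ADDR_LEN) == 0 &&
              memcmp(f->dst_peer, key->dst_peer, PEER_ADDR_LEN) == 0 &&
              f->src_trans == key->src_trans && f->dst_trans == key->dst_trans)
              return f;
          f = f->chain;
      } while (f != head);
      return NULL;
  }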
The optimisation implemented in NeTraMet is to find groups of rules which test the same attribute with the same mask, and convert them into a single hashed search of their values. The overhead of setting up the hash tables (one for each group) is incurred once, just before the meter starts running a new rule set. When a 'group' test is to be performed, the meter ANDs the incoming attribute value, computes a hash value from it, and uses this to search the group's hash table. Early tests showed that the rule hash chains were usually very short, usually having only one or two members. The effect is to reduce large sequences of tests to a hash computation and lookup, with a very small number of compares; in short this is an essential optimisation for any traffic meter!

There is, of course, overhead associated with performing the hashed compare. NeTraMet handles this by having a minimum group size defined at compile time. If the group is too small it is not combined into a hashed group. In early versions of NeTraMet I did not allow Gotos into a hashed group of rules, which proved to be an unnecessarily conservative position. NeTraMet stores each group's hash table in a separate memory area, and keeps a pointer to the hash table in the first rule of the group. (The rule data structure has an extra component to hold this hash table pointer.) Rules within the group don't have hash table pointers; when they are executed as the target of a Goto rule they behave as ordinary rules, i.e. their tests are performed normally.

2.3.4 Compression of address masks

When the Packet Matching Engine has decided that an incoming packet belongs to a flow which is to be measured, it searches the flow table to determine whether or not the flow is already present. It does this by computing a hash from the packet and using it for access to the flow table's search index. When designing a hash table, one normally assumes that the objects in the table have a constant size. For NeTraMet's flow table this would mean that each flow would contain a value for every attribute. This, however, is not the case, since only those attribute values 'pushed' by rules during packet matching are stored for a flow.

To demonstrate this problem, let us assume that every flow in the table contains a value for only one attribute, SourcePeerAddress, and that the rule set decides whether flows belong to a specified list of IP networks, in which case only their network numbers are pushed. The rules perform this test using a variety of masks, since the network number allocations range from 16 to 24 bits in width. In searching the flow table, the meter must distinguish between zeroes in the address and 'don't care' bits which had been ANDed out. To achieve this it must store SourcePeerMask values in the flow table as well as the ANDed SourcePeerAddress values.

In early versions of NeTraMet this problem was side-stepped by using multiple hash tables and relying on the user to write rules which used the same set of attributes and masks for all the flows in each table. This was effective, but clumsy and difficult to explain. Later versions changed to using a single hash table, and storing the mask values for all the address attributes in each flow. The current version of the meter stores the address masks in compressed form. After examining a large number of rule sets I realised that although a rule set may have many rules, it usually has a very small number of address masks.
It is a simple matter to build a table of address masks, and store an index into this 'mask table' instead of a complete mask. NeTraMet's maximum number of masks is defined at compile time, up to a maximum of 256. This allows me to use a single byte for each mask in the flow data structure, significantly reducing the structure's size. As well as this size reduction, two masks can be compared by comparing their indices in the mask table, i.e. the comparison reduces to a single-byte compare. Overall, using a mask table seems to provide useful improvements in storage efficiency and execution speed.

2.3.5 Ignoring unwanted flow data

As described in the Architecture document, every incoming packet is tested against the current rule set by the Packet Matching Engine. This section explains my efforts to improve NeTraMet performance on the PC by reducing the amount of processing required by each incoming packet. On the PC each incoming packet causes an interrupt, which NeTraMet must process so as to collect information about the packet. In early versions I used a ring buffer with 512 slots for packet headers, and simply copied each packet's first 64 bytes into the next free slot. The packet headers were later taken from the buffer, attribute values were extracted from them, and the resulting 'incoming attribute values' records were passed to the Packet Matching Engine. I modified the interrupt handling code to extract the attribute values and store them in a 'buffer slot.' This reduced the amount of storage required in each slot, allowing more space for storing flows. It did increase slightly the amount of processing done for each packet interrupt, but this has not caused any problems.

In later versions I realised that if one is only interested in measuring IP packets, there is no point in storing (and later processing) Novell or EtherTalk packets! It is a simple matter for the meter to inspect a rule set and determine which Peer types are of interest. If there are PushRule rules which test SourcePeerType (or DestPeerType), they specify which types are of interest. If there are no such rules, every packet type is of interest. The PC NeTraMet has a set of Boolean variables, one for each protocol it can handle. The values of these 'protocol' variables are determined when the meter begins running a new rule set. For each incoming packet, the interrupt handler determines the Peer type. If the protocol is not of interest, no further processing is done - the packet is simply ignored. In a similar manner, if Adjacent addresses are never tested there is no point in copying them into the packet buffer slot.

The overall effect of these optimisations is most noticeable for rule files which measure IP flows on a network segment which also carries a lot of traffic for other network protocols; this situation is common on multiprotocol Local Area Networks. On the Unix version of NeTraMet the Operating System does all the packet interrupt processing, and libpcap delivers packet headers directly to NeTraMet. The 'protocol' and 'adjacent address' optimisations are still performed, at the point when NeTraMet receives the packet headers from libpcap.

2.3.6 Observing meter reader activity

The Architecture document explains that a flow data record must be held in the meter until its data has been read by a meter reader. A meter must therefore have a reliable way of deciding when flow data has been read.
The problem is complicated by the fact that there may be more than one meter reader, and that meter readers collect their data asynchronously. Early versions of NeTraMet solved this problem by having a single MIB variable which a meter reader could set to indicate that it was beginning a data collection. In response to such an SNMP SET request, NeTraMet would update its 'collectors' table. This had an entry for each meter reader, and variables recording the start time for the last two collections. The most recent collection might still be in progress, but its start time provides a safe estimate of the time when the one before it actually finished. Space used for flows which have been idle since the penultimate collection started can be recovered by the meter's garbage collector, as described below.

The Meter MIB specifies a more general table of meter reader information. A meter reader wishing to collect data from a meter must inform the meter of its intention by creating a row in the table, then setting a LastTime variable in that row to indicate the start of a collection. The meter handles such a SET request exactly as described above. If there are multiple meter readers the meter can easily find the earliest time any of them started its penultimate collection, and may recover flows idle since then. Should a meter reader fail, NeTraMet will eventually time out its entry in the meter reader info table, and delete it. This avoids a situation where the meter can't recover flows until they have been collected by several meter readers, one of which has failed.

2.3.7 Meter memory management

In principle, the size of the flow table (i.e. the maximum number of flows) could be changed dynamically. This would involve allocating space for the flow table's new pointer array and copying the old pointers into it. NeTraMet does not implement this. Instead the maximum number of flows is set from the command line when it starts execution. If no maximum is specified, a compile-time default number is used.

Memory for flow data structures (i.e. 'flows') is allocated dynamically. NeTraMet requests the C run-time system for blocks of several hundred flows, and links them into a free list. When a new flow is needed NeTraMet takes memory space from the free list, then searches the flow table's pointer array for an unused flow pointer. In practice a 'last-allocated' index is used to point into the flow table, so a simple linear search suffices. The flow index is saved in the flow's data record, and its other attribute values are set to zero. To release a flow data record it must first be removed from any hash list it is part of - this is straightforward since those lists are circular. The flow's entry in the flow table pointer array is then set to zero (a NULL pointer), and its space is returned to the free list.

Once a flow data record is created it could continue to exist indefinitely. In time, however, the meter would run out of space. To deal with this problem NeTraMet uses an incremental garbage collector to reclaim memory. At regular intervals specified by a 'GarbageCollectInterval' variable the garbage collector procedure is invoked. This searches through the flow table looking for flows which might be recovered.
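The allocation scheme just described might be sketched as follows, reusing the illustrative flow_t, flow_table and MAX_FLOWS definitions from the earlier fragment. Again, this is an assumption-laden sketch rather than the actual NeTraMet code; the block size and names are made up.

  #include <stdlib.h>
  #include <string.h>

  #define FLOW_BLOCK 256                   /* flows requested from malloc() at a time */

  static flow_t *free_list;                /* linked through the chain field */
  static unsigned last_alloc;              /* rotating 'last-allocated' index */

  static flow_t *allocate_flow(void)
  {
      unsigned i, slot = 0;
      flow_t *f;

      if (free_list == NULL) {             /* grow the pool by one block */
          flow_t *block = malloc(FLOW_BLOCK * sizeof(flow_t));
          if (block == NULL)
              return NULL;
          for (i = 0; i < FLOW_BLOCK; i++) {
              block[i].chain = free_list;
              free_list = &block[i];
          }
      }

      /* Linear search from the last-allocated index for an unused table entry. */
      for (i = 0; i < MAX_FLOWS; i++) {
          slot = (last_alloc + 1 + i) % MAX_FLOWS;
          if (flow_table[slot] == NULL)
              break;
      }
      if (i == MAX_FLOWS)
          return NULL;                     /* flow table is full */
      last_alloc = slot;

      f = free_list;                       /* take a record from the free list */
      free_list = f->chain;
      memset(f, 0, sizeof *f);             /* attribute values start at zero */
      f->index = slot;
      flow_table[slot] = f;
      return f;
  }

  /* The caller must already have unlinked f from its circular hash chain. */
  static void release_flow(flow_t *f)
  {
      flow_table[f->index] = NULL;
      f->chain = free_list;                /* back onto the free list */
      free_list = f;
  }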
To control the resources consumed by garbage collection there are limits on the number of in-use and idle flows which the garbage collector may inspect; these are set either when NeTraMet is started (as options on the command line) or dynamically by NeMaC (using variables in an Enterprise MIB for NeTraMet).

To decide whether a flow can be recovered, the garbage collector considers how long it has been idle (no packets in either direction), and when its data was last collected. If it has been collected by all known meter readers since its LastTime, its memory may be recovered. This algorithm is implemented using a variable called 'GarbageCollectTime,' which normally contains the meter's UpTime when the penultimate collection (i.e. the one before last) was started. See the section on observing meter reader activity (above) for more details.

Should flows not be collected often enough the meter could run out of space. NeTraMet attempts to prevent this by having a low-priority background process check the percentage of flows active and compare it with the HighWaterMark MIB variable. If the percentage of active flows is greater than the high-water mark, 'GarbageCollectTime' is incremented by the current value of the InactivityTimeout MIB variable.

The Meter MIB specifies that a meter should switch to using a 'standby' rule set if the percentage of active flows rises above HighWaterMark. In using NeTraMet to measure traffic flows to and from the University of Auckland it has not been difficult to create standby rules which are very similar to the 'production' rule file, differing only in that they push much less information about flows. This has, on several occasions, allowed the meter to continue running for one or two days after the meter reader failed. When the meter reader restarted, it was able to collect all the accumulated flow data!

The MIB also specifies that the meter should take some action when the active flow percentage rises above its FloodMark value. If this were not done, the meter could spend a rapidly increasing proportion of its time garbage collecting, to the point where its ability to respond to requests from its manager would be compromised. NeTraMet switches to the default rule set when its FloodMark is reached. A potentially large number of new flows may be created when the meter switches to a standby rule set. It is important to set HighWaterMark so as to allow enough flow table space for this. In practice, a HighWaterMark of 65% and a FloodMark of 95% seem to work well.

2.4 Data collection

As explained above, a meter reader wishing to collect flows begins each collection by setting the LastTime variable in its ReaderInfoTable row, then works its way through the flow table collecting data. A number of algorithms can be used to examine the flow table; these are presented below. The simplest approach is a linear scan of the table, reading the LastTime variable for each row. If the read fails the row is inactive. If it succeeds, it is of interest if its LastTime value is greater than the time of the last collection. Although this method is simple it is also rather slow, requiring an SNMP GET request for every possible flow; this renders it impractical.

Early versions of NeTraMet used two 'windows' into the flow table to find flows which were of interest. Both windows were SNMP tables, indexed by a variable which specified a time.
A succession of GETNEXT requests on one of these windows allowed NeMaC (the meter reader) to find the flow indices of all flows which had been active since the specified time. The two windows were the ActivityTime window (which located active flows), and the CreateTime window (which located new flows). Knowing the index of an active flow, the meter reader can GET the values of all the attributes of interest. NeMaC allows the user to specify which these are, rather than simply reading all the attributes. Having the two windows allowed NeMaC to read attributes which remain constant - such as the flow's address attributes - when the flow is created, but to read only attributes which change with time - such as its packet and byte counts - during later collections. Experience has shown, however, that many flows have rather short lifetimes; one effect of this is that the improved efficiency of using two windows does not result in any worthwhile improvement in collection performance.

The current version of the Meter MIB uses a TimeFilter variable in the flow table entries. This can be used with GETNEXT requests to find all flows which have been active since a specified time directly, without requiring the extra 'window' SNMP variables. It can be combined with SNMPv2's GETBULK request to further reduce the number of SNMP packets needed for each collection; I have yet to implement this in NeTraMet.

A disadvantage of using SNMP to collect data from the meter is that SNMP packets impose a high overhead. For example, if we wish to read an Integer32 variable (four bytes of data), it will be returned with its object identifier, type and length, i.e. at least ten bytes of superfluous data. One way to reduce this overhead is to use an Opaque object to return a collection of data. NeTraMet uses this approach to retrieve 'column activity data' from the meter, as follows. Each packet of column activity data contains data values for a specified attribute, and each value is preceded by its flow number. The flow table can be regarded as a two-dimensional array, with a column for each flow attribute. Column activity data objects allow the meter reader to read columns of the flow table, so as to collect only those attributes specified by the user.

The actual implementation is complicated by the fact that since the flow table is read column by column, rows can become active after the first column has been read. NeMaC reads the widest columns (those with greatest size in bytes, e.g. PeerAddress) first, and ignores any rows which appear in later columns. Newly active rows will, of course, be read in the next collection. Using Opaque objects in this way dramatically reduces the number of SNMP packets required to read a meter. This has proved worthwhile in situations where the number of flows is large (for example on busy LANs), and where the meter(s) are physically dispersed over slow WAN links. It has the disadvantage that general-purpose MIB browsers cannot understand the column activity variables, but this seems a small price to pay for the improved data collection performance.

2.5 Restarting a meter

If a meter fails, for example because of a power failure, it will restart and begin running rule set 1, the default rule set which is built into the meter. Its manager must recognise that this has happened, and respond with some suitable action. NeMaC allows the user to specify a 'keepalive' interval. After every such interval NeMaC reads the meter's sysUptime and compares it with the last sysUptime.
If the new sysUptime is less than the last one, NeMaC decides that the meter has restarted. It downloads the meter's backup rule set and production rule set, then requests the meter to start running the production rule set. In normal use we use a keepalive interval of five minutes and a collection interval of 15 minutes. If a meter restarts, we lose up to five minutes of data before the rule sets are downloaded. Having the meter run the default rule set on startup is part of the Traffic Flow Measurement Architecture, in keeping with the notion that meters are very simple devices which do not have disk storage. Since disks are now very cheap, it may be worth considering whether the architecture should allow a meter to save its configuration (including rule sets) on disk.

2.6 Performance

The PC version of the meter, NeTraMet, continually measures how much processor time is being used. Whenever there is no incoming packet data to process, 'dummy' packets are generated and placed in the input buffer. These packets are processed normally by the Packet Matching Engine; they have a PeerType of 'dummy.' The numbers of dummy and normal packets are counted by the meter; their ratio is used as an estimate of the processor time which is 'idle,' i.e. not being used to process incoming packets. The Unix version is intended to run as a process in a multiprocessing system, so it cannot busy-wait in this way. The meter also collects several other performance measures; these can be displayed on the meter console in response to keyboard requests.

The PC meter can be used with a 10 MHz 286 machine, on which it can handle a steady load of about 750 packets per second. On a 25 MHz 386SX it will handle about 1250 packets per second. Users have reported that a 40 MHz 486 can handle peaks of about 3,000 packets per second without packet loss. The Unix meter has been tested metering traffic on a (lightly loaded) FDDI interface; it uses about one percent of the processor time on a SPARC 10 system running Solaris.

3 Writing rule sets

The Traffic Meter provides a versatile device for measuring a user-specified set of traffic flows, and performing useful data reduction on them. This data reduction capability not only minimises the volume of data to be collected by meter readers, but also simplifies the later processing of traffic flow data. The flows of interest, and the processing to be performed, are specified in a 'rule set' which is downloaded to the meter (NeTraMet) by the manager (NeMaC). This section explains what is involved in writing rule sets.

NeTraMet is limited to metering packets observed on a network segment. This means that for all the observed flows, Source and Dest Type attributes (e.g. SourcePeerType and DestPeerType) have the same value. The NeTraMet implementation uses single variables in its flow data structure for AdjacentType, SourceType and TransType. Nonetheless, the rule sets discussed below push values for both Source and Dest Type attributes; this makes sure that packet matching works properly with the directions reversed, even for a meter which allows Source and Dest Type values to be different.

3.1 Rule set to observe all flows

NeMaC reads rule sets from text files which contain the rules, the set number by which the meter (and meter reader) will identify them, and a 'format,' i.e. a list specifying which attributes the meter reader should collect and write to the flow data file. The # character indicates the start of a comment; NeMaC ignores the rest of the line.
SET 2
#
RULES
#
SourcePeerType & 255 = Dummy: Ignore, 0;
Null & 0 = 0: GotoAct, Next;
#
SourcePeerType & 255 = 0: PushPkttoAct, Next;
DestPeerType & 255 = 0: PushPkttoAct, Next;
SourcePeerAddress & 255.255.255.255 = 0: PushPkttoAct, Next;
DestPeerAddress & 255.255.255.255 = 0: PushPkttoAct, Next;
SourceTransType & 255 = 0: PushPkttoAct, Next;
DestTransType & 255 = 0: PushPkttoAct, Next;
SourceTransAddress & 255.255 = 0: PushPkttoAct, Next;
DestTransAddress & 255.255 = 0: CountPkt, 0;
#
FORMAT FlowRuleSet FlowIndex FirstTime " "
  SourcePeerType SourcePeerAddress DestPeerAddress " "
  SourceTransType SourceTransAddress DestTransAddress " "
  ToPDUs FromPDUs " " ToOctets FromOctets;

The first rule tests the incoming packet's SourcePeerType to see whether it is 'dummy.' If it is, the packet is ignored, otherwise the next rule is executed. The second rule tests the Null attribute. Such a test always succeeds, so the rule simply jumps to the action of the next rule. (The keyword 'next' is converted by NeMaC into the number of the following rule.) The third rule pushes the packet's SourcePeerType value, then jumps to the action of the next rule. The user does not know in advance what the value of PushPkt rules will be, which is why the value appearing in them is always zero. The user must take care not to write rule sets which try to perform the test in a PushPkt rule. This is a very common error in a rule set, so NeMaC tests for it and displays an error message.

The following rules push a series of attribute values from the packet, and the last rule also Counts the packet, i.e. it tells the Packet Matching Engine (PME) that the packet has been successfully matched. The PME responds by searching the flow table to see whether the flow is already current (i.e. in the table), creating a new flow data record for it should this be necessary, and incrementing its packet and byte counters.

Overall this rule set simply classifies the packet (i.e. decides whether or not it is to be counted), then pushes all the Peer and Transport attribute values for it. It makes no attempt to specify a direction for the flow - this is left to the PME, as described in the Architecture document. The resulting flow data file will have each flow's source and destination addresses in the order of the first packet the meter observed for the flow.

3.2 Specifying flow direction, using computed attributes

As indicated above, the Packet Matching Engine will reliably determine the flow, and the direction within that flow, for every packet seen by a meter. If the rule set does not specify a direction for the flow, the PME simply assumes that the first packet observed for a flow is travelling forward, i.e. from source to destination. In later analysis of the flow data, however, one is usually interested in traffic to or from a particular source. One can achieve this in a simple manner by writing a rule set to specify the source for flows. All that is required is to have rules which succeed if the packet is travelling in the required direction, and which execute a 'Fail' action otherwise. This is demonstrated in the following two examples. (Note that early versions of NeMaC allowed 'Retry' as a synonym for 'Fail.'
The current version also allows 'NoMatch,' which seems a better way to imply "fail, allowing the PME to try a second match with directions reversed.")

# Count IP packets from network 130.216
#
SourcePeerType & 255 = IP: Pushto, ip_pkt;
Null & 0 = 0: Ignore, 0;
#
ip_pkt: SourcePeerAddress & 255.255.0.0 = 130.216: Goto, c_pkt;
  Null & 0 = 0: NoMatch, 0;
#
c_pkt: SourcePeerAddress & 255.255.255.255 = 0: PushPkttoAct, Next;
  DestPeerAddress & 255.255.255.255 = 0: CountPkt, 0;

The rule labelled ip_pkt tests whether the packet came from network 130.216. If it did not, the test fails and the following rule executes a NoMatch action, causing the PME to retry the match with the directions reversed. If the second match fails the packet did not have 130.216 as an end-point, and is ignored.

The next rule set meters IP traffic on a network segment which connects two routers, g1 and g2. It classifies flows into three groups - those travelling from g1 to g2, those whose source is g1 and those whose source is g2.

# Count IP packets between two gateways
#
#  -------+-------------------+------------------+-------
#         |                   |                  |
#    +----+-----+        +----+-----+        +---+---+
#    |    g1    |        |    g2    |        | meter |
#    +-+-+-+-+--+        +-+-+-+-+--+        +-------+
#
SourcePeerType & 255 = IP: Pushto, ip_pkt;
Null & 0 = 0: Ignore, 0;
#
ip_pkt: SourceAdjacentAddress & FF-FF-FF-FF-FF-FF = 00-80-48-81-0E-7C: Goto, s1;
  Null & 0 = 0: Goto, s2;
s1: DestAdjacentAddress & FF-FF-FF-FF-FF-FF = 02-07-01-04-ED-4A: GotoAct, g3;
  Null & 0 = 0: GotoAct, g1;
s2: SourceAdjacentAddress & FF-FF-FF-FF-FF-FF = 02-07-01-04-ED-4A: Goto, s3;
  Null & 0 = 0: NoMatch, 0;
s3: DestAdjacentAddress & FF-FF-FF-FF-FF-FF = 00-80-48-81-0E-7C: NoMatch, 0;
  Null & 0 = 0: GotoAct, g2;
#
g1: FlowClass & 255 = 1: PushtoAct, c_pkt;   # From g1
g2: FlowClass & 255 = 2: PushtoAct, c_pkt;   # From g2
g3: FlowClass & 255 = 3: PushtoAct, c_pkt;   # g1 to g2
#
c_pkt: SourceAdjacentAddress & FF-FF-FF-FF-FF-FF = 0: PushPkttoAct, Next;
  DestAdjacentAddress & FF-FF-FF-FF-FF-FF = 0: PushPkttoAct, Next;
  SourcePeerAddress & 255.255.255.255 = 0: PushPkttoAct, Next;
  DestPeerAddress & 255.255.255.255 = 0: PushPkttoAct, Next;
  Null & 0 = 0: Count, 0

The first two rules ignore non-IP packets. The next two rules Goto s1 if the packet's source was g1, or to s2 otherwise. The rule labelled s2 tests whether the packet's source was g2; if not a NoMatch action is executed, allowing the PME to try the match with the packet's direction reversed. If the match fails on the second try the packet didn't come from (or go to) g1 or g2, and is ignored.

Packets which come from g1 are tested by the rule labelled s1, and the PME will Goto either g3 or g1. Packets which came from g2 are tested by the rule labelled s3. If they are not going to g1 the PME will Goto g2. If they are going to g1 a NoMatch action is executed - we want them counted as backward-travelling packets for the g1-g2 flow.

The rules at g1, g2 and g3 push the value 1, 2 or 3 from their rule into the flow's FlowClass attribute. This value can be used by an Analysis Application to separate the flows into the three groups of interest. FlowClass is an example of a 'computed' attribute, i.e. one whose value is Pushed by the PME during rule matching. The remaining rules Push the values of other attributes required for later analysis, then Count the flow.

3.3 Subroutines

Subroutines are implemented in the PME in much the same way as in BASIC. A subroutine body is just a sequence of statements, supported by the GoSub and Return actions.
'GoSub' saves the PME's running environment and jumps to the first rule of the subroutine body. Subroutine calls may be nested as required - NeTraMet defines the maximum nesting at compile time. 'Return n' restores the environment and jumps to the action part of the nth rule after the Gosub, where n is the index value from the Return rule. The Return action provides a way of influencing the flow of control in a rule set, rather like a FORTRAN Computed Goto. This is one way in which a subroutine can return a result. The other way is by Pushing a value into either a computed attribute (as demonstrated in the preceding section), or a flow attribute.

One common use for a subroutine is to test whether a packet attribute matches one of a set of values. Such a subroutine becomes much more useful if it can be used to test one of several attributes. The PME architecture provides for this by using 'meter variables' to hold the names of the attributes to be tested. The meter variables are called V1, V2, V3, V4 and V5, and the Assign action is provided to set their values. If, for example, we need a subroutine to test either SourcePeerAddress or DestPeerAddress, we write its rules to test V1 instead. Before calling the subroutine we Assign SourcePeerAddress to V1; later tests of V1 are converted by the PME into tests on SourcePeerAddress. Note that since meter variables may be reassigned in a subroutine, their values are part of the environment which must be saved by a Gosub action.

The following rule set demonstrates the use of a subroutine.

# Rule specification file to tally IP packets in three groups:
#   UA to AIT, UA to elsewhere, AIT to elsewhere
#
#  -------+-------------------+-----------------+--------
#         |                   |                 |
#    +----+-----+        +----+-----+       +---+---+
#    |    UA    |        |   AIT    |       | meter |
#    +-+-+-+-+--+        +-+-+-+-+--+       +-------+
#
SourcePeerType & 255 = IP: PushtoAct, ip_pkt;
Null & 0 = 0: Ignore, 0;
#
ip_pkt: v1 & 0 = SourcePeerAddress: AssignAct, Next;
  Null & 0 = 0: Gosub, classify;
  Null & 0 = 0: GotoAct, from_ua;   # 1  ua
  Null & 0 = 0: GotoAct, from_ait;  # 2  ait
  Null & 0 = 0: NoMatch, 0;         # 3  other
#
from_ua: v1 & 0 = DestPeerAddress: AssignAct, Next;
  Null & 0 = 0: Gosub, classify;
  Null & 0 = 0: Ignore, 0;          # 1  ua-ua
  Null & 0 = 0: GotoAct, ok_pkt;    # 2  ua-ait
  Null & 0 = 0: GotoAct, ok_pkt;    # 3  ua-other
#
from_ait: v1 & 0 = DestPeerAddress: AssignAct, Next;
  Null & 0 = 0: Gosub, classify;
  Null & 0 = 0: NoMatch, 0;         # 1  ait-ua
  Null & 0 = 0: Ignore, 0;          # 2  ait-ait
  Null & 0 = 0: GotoAct, ok_pkt;    # 3  ait-other
#
ok_pkt: Null & 0 = 0: Count, 0;

The subroutine begins at the rule labelled classify (shown below). It returns to the first, second or third rule after the invoking Gosub rule, depending on whether the tested PeerAddress is in the UA, AIT, or 'other' group of networks. In the listing below only one network is tested in each of the groups - it is trivial to add more rules (one per network) into either of the first two groups. In this example the subroutine Pushes the network number from the packet into the tested attribute before returning.

The first invocation of classify (above) begins at the rule labelled ip_pkt. It Assigns SourcePeerAddress to V1 then executes a Gosub action. Classify returns to one of the three following rules. They will Goto from_ua or from_ait if the packet came from the UA or AIT groups, otherwise the PME will retry the match. This means that matched flows will have a UA or AIT network as their source, and flows between other networks will be ignored.
The next two invocations of 'classify' test the packet's DestPeerAddress. Packets from AIT to UA are Retried, forcing them to be counted as UA to AIT flows. Packets from UA to UA are ignored, as are packets from AIT to AIT.

classify:
  v1 & 255.255.0.0 = 130.216: GotoAct, ua;   # ua
  v1 & 255.255.0.0 = 156.62: GotoAct, ait;   # ait
  Null & 0 = 0: Return, 3;                   # other
ua:
  v1 & 255.255.0.0 = 0: PushPkttoAct, Next;
  Null & 0 = 0: Return, 1;
ait:
  v1 & 255.255.0.0 = 0: PushPkttoAct, Next;
  Null & 0 = 0: Return, 2;

3.4 More complicated rule sets

The next example demonstrates a way of grouping IP flows together depending on their Transport Address, i.e. their IP port number. Simply Pushing every flow's SourceTransAddress and DestTransAddress would produce a large number of flows, most of which differ only in one of their transport addresses (the one which is not a well-known port). Instead we Push the well-known port number into each flow's SourceTransAddress; its DestTransAddress will be zero by default.

SourcePeerType & 255 = dummy: Ignore, 0;
SourcePeerType & 255 = IP: Pushto, IP_pkt;
Null & 0 = 0: GotoAct, Next;
SourcePeerType & 255 = 0: PushPkttoAct, Next;
Null & 0 = 0: Count, 0;              # Count others by protocol type
#
IP_pkt: SourceTransType & 255 = tcp: Pushto, tcp_udp;
  SourceTransType & 255 = udp: Pushto, tcp_udp;
  SourceTransType & 255 = icmp: CountPkt, 0;
  SourceTransType & 255 = ospf: CountPkt, 0;
  Null & 0 = 0: GotoAct, c_unknown;  # Unknown transport type
#
tcp_udp:
s_domain: SourceTransAddress & 255.255 = domain: PushtoAct, c_well_known;
s_ftp:    SourceTransAddress & 255.255 = ftp: PushtoAct, c_well_known;
s_imap:   SourceTransAddress & 255.255 = 113: PushtoAct, c_well_known;
s_nfs:    SourceTransAddress & 255.255 = 2049: PushtoAct, c_well_known;
s_pop:    SourceTransAddress & 255.255 = 110: PushtoAct, c_well_known;
s_smtp:   SourceTransAddress & 255.255 = smtp: PushtoAct, c_well_known;
s_telnet: SourceTransAddress & 255.255 = telnet: PushtoAct, c_well_known;
s_www:    SourceTransAddress & 255.255 = www: PushtoAct, c_well_known;
s_xwin:   SourceTransAddress & 255.255 = 6000: PushtoAct, c_well_known;
#
  DestTransAddress & 255.255 = domain: GotoAct, s_domain;
  DestTransAddress & 255.255 = ftp: GotoAct, s_ftp;
  DestTransAddress & 255.255 = 113: GotoAct, s_imap;
  DestTransAddress & 255.255 = 2049: GotoAct, s_nfs;
  DestTransAddress & 255.255 = 110: GotoAct, s_pop;
  DestTransAddress & 255.255 = smtp: GotoAct, s_smtp;
  DestTransAddress & 255.255 = telnet: GotoAct, s_telnet;
  DestTransAddress & 255.255 = www: GotoAct, s_www;
  DestTransAddress & 255.255 = 6000: GotoAct, s_xwin;
#
  Null & 0 = 0: GotoAct, c_unknown;  # 'Unusual' port
#
c_unknown: SourceTransType & 255 = 0: PushPkttoAct, Next;
  DestTransType & 255 = 0: PushPkttoAct, Next;
  SourceTransAddress & 255.255 = 0: PushPkttoAct, Next;
  DestTransAddress & 255.255 = 0: CountPkt, 0;
#
c_well_known: Null & 0 = 0: Count, 0

The first few rules ignore dummy packets, select IP packets for further processing, and count packets for other protocols in a single flow for each PeerType. TCP and UDP packets cause the PME to Push their TransType and Goto tcp_udp. ICMP and OSPF packets are counted in flows which have only their TransType Pushed. At tcp_udp the packets' SourceTransAddress is tested to see whether it is included in a set of 'interesting' port numbers. If it is, the port number is pushed from the rule into the SourceTransAddress attribute, and the packet is counted at c_well_known. (NeMaC accepts Pushto as a synonym for PushRuleto.)
This testing is repeated for the packet's DestTransAddress; if one of these tests succeeds the PME Goes to the corresponding rule above and Pushes the port number into the flow's SourceTransAddress. If these tests fail the packet is counted at c_unknown, where all the flow's Trans attributes are pushed. For production use more well-known ports would need to be included in the tests above - c_unknown is intended only for little-used exception flows! Note that these rules only Push a value into a flow's SourceTransAddress, and they don't contain any NoMatch actions. They therefore don't specify a packet's direction, and they could be used in other rule sets to group together flows for well-known ports. The last example (below) meters flows from a remote router, and demonstrates another approach to grouping well-known ports.

SourceAdjacentAddress & FF-FF-FF-FF-FF-FF = 00-60-3E-10-E0-A1: Goto, gateway;  # tmkr2 router
DestAdjacentAddress & FF-FF-FF-FF-FF-FF = 00-60-3E-10-E0-A1: Goto, gateway;    # Source is tmkr2
Null & 0 = 0: Ignore, 0;
#
gateway: SourcePeerType & 255 = IP: GotoAct, IP_pkt;
Null & 0 = 0: GotoAct, Next;
SourcePeerType & 255 = 0: CountPkt, 0;
#
IP_pkt: SourceTransType & 255 = tcp: PushRuleto, tcp_udp;
SourceTransType & 255 = udp: PushRuleto, tcp_udp;
Null & 0 = 0: GotoAct, not_wkp;      # Unknown transport type
#
tcp_udp: SourceTransAddress & FC-00 = 0: GotoAct, well_known_port;
DestTransAddress & FC-00 = 0: NoMatch, 0;
Null & 0 = 0: GotoAct, not_wkp;
#
not_wkp: DestTransAddress & 255.255 = 0: PushPkttoAct, Next;
well_known_port: SourcePeerType & 255 = 0: PushPkttoAct, Next;
DestPeerType & 255 = 0: PushPkttoAct, Next;
SourcePeerAddress & 255.255.255.0 = 0: PushPkttoAct, Next;
DestPeerAddress & 255.255.255.0 = 0: PushPkttoAct, Next;
SourceTransType & 255 = 0: PushPkttoAct, Next;
DestTransType & 255 = 0: PushPkttoAct, Next;
SourceTransAddress & 255.255 = 0: CountPkt, 0;

The first group of rules test incoming packets' AdjacentAddresses to see whether they belong to a flow with an end point at the specified router. Any which don't are ignored. Non-IP packets are counted in flows which only have their PeerType Pushed; these will produce one flow for each non-IP protocol. IP packets with TransTypes other than UDP and TCP are counted at not_wkp, where all their address attributes are pushed. The high-order six bits of SourceTransAddress for UDP and TCP packets are compared with zero. If this succeeds their source port number is less than 1024, so they are from a well-known port. The port number is pushed from the rule into the flow's SourceTransAddress attribute, and the packet is counted at well_known_port. If the test fails, it is repeated on the packet's DestTransAddress. If the destination is a well-known port the match is Retried, and will succeed with the well-known port as the flow's source. If later analysis were to show that a high proportion of the observed flows were from non-well-known ports, further pairs of rules could be added to perform a test in each direction for other heavily-used ports.

4 Flow data files

Although the Architecture document specifies - in great detail - how the Traffic Flow Meter works, and how a meter reader should collect flow data from a meter, it does not say anything about how the collected data should be stored. NeMaC uses a simple, self-documenting file format, which has proved to be very effective in use. There are two kinds of records in a flow data file: flow records and information records.
Each flow record is simply a sequence of attribute values with separators (these can be specified in a NeMaC rule file) or spaces between them, terminated by a newline. Information records all start with a cross-hatch. The file's first record begins with ##, and identifies the file as being a file of data from NeTraMet. It records NeMaC's parameters and the time this collection was started. The file's second record begins with #Format: and is a copy of the Format statement used by NeMaC to collect the data. The rest of the file is a sequence of collected data sets. Each of these starts with a #Time: record, giving the time-of-day the collection was started, the meter name, and the range of meter times this collection represents. These from and to times are meter UpTimes, i.e. they are times in hundredths of seconds since the meter commenced operation. Most analysis applications have simply used the collection start times (which are ASCII time-of-day values), but the from and to times could be used to convert Uptime values to time-of-day. The flow records which comprise a data set follow the #Time record.

4.1 Sample flow data file

A sample flow data file appears below. Most of the flow records have been deleted, but lines of dots show where they were.

##NeTraMet v3.2. -c300 -r rules.lan -e rules.default test_meter -i eth0 4000 flows starting at 12:31:27 Wed 1 Feb 95
#Format: flowruleset flowindex firsttime sourcepeertype sourcepeeraddress destpeeraddress topdus frompdus tooctets fromoctets
#Time: 12:31:27 Wed 1 Feb 95 22.214.171.124 Flows from 1 to 3642
1 2 13 5 126.96.36.199 188.8.131.52 1138 0 121824 0
1 3 13 2 184.108.40.206 220.127.116.11 4215 0 689711 0
1 4 13 7 18.104.22.168 22.214.171.124 1432 0 411712 0
1 5 13 6 126.96.36.199 188.8.131.52 8243 0 4338744 0
3 6 3560 2 184.108.40.206 220.127.116.11 0 10 0 1053
3 7 3560 2 18.104.22.168 22.214.171.124 59 65 4286 3796
3 8 3560 7 0.0.255.0 126.96.36.199 0 4 0 222
3 9 3560 2 188.8.131.52 184.108.40.206 118 1 32060 60
3 10 3560 6 220.127.116.11 18.104.22.168 782 1 344620 66
3 11 3560 7 0.0.255.0 0.128.113.0 0 1 0 73
3 12 3560 5 22.214.171.124 126.96.36.199 1 1 60 60
3 13 3560 7 0.128.94.0 0.129.27.0 2 2 120 158
3 14 3560 5 188.8.131.52 184.108.40.206 2 2 120 120
3 15 3560 5 0.0.0.0 220.127.116.11 0 1 0 60
3 16 3560 5 18.104.22.168 22.214.171.124 2 2 120 120
. . . . . . . . .
3 42 3560 7 0.128.42.0 0.129.34.0 0 1 0 60
3 43 3560 7 0.128.42.0 0.128.43.0 0 1 0 60
3 44 3560 7 0.128.42.0 0.128.41.0 0 1 0 60
3 45 3560 7 0.128.42.0 0.129.2.0 0 1 0 60
3 46 3560 5 126.96.36.199 188.8.131.52 2 2 120 120
3 47 3560 5 184.108.40.206 220.127.116.11 2 2 120 120
3 48 3560 5 18.104.22.168 22.214.171.124 2 2 120 120
3 49 3560 5 0.0.0.0 126.96.36.199 0 1 0 60
3 50 3664 5 188.8.131.52 184.108.40.206 0 1 0 60
3 51 3664 5 0.0.0.0 220.127.116.11 0 1 0 60
3 52 3664 5 18.104.22.168 22.214.171.124 4 4 240 240
#Time: 12:36:25 Wed 1 Feb 95 126.96.36.199 Flows from 3641 to 33420
3 6 3560 2 188.8.131.52 184.108.40.206 0 21 0 2378
3 7 3560 2 220.127.116.11 18.104.22.168 9586 7148 1111118 565274
3 8 3560 7 0.0.255.0 22.214.171.124 0 26 0 1983
3 9 3560 2 126.96.36.199 188.8.131.52 10596 1 2792846 60
3 10 3560 6 184.108.40.206 220.127.116.11 16589 1 7878902 66
3 11 3560 7 0.0.255.0 0.128.113.0 0 87 0 16848
3 12 3560 5 18.104.22.168 22.214.171.124 20 20 1200 1200
3 13 3560 7 0.128.94.0 0.129.27.0 15 14 900 1144
3 14 3560 5 126.96.36.199 188.8.131.52 38 38 2280 2280
3 15 3560 5 0.0.0.0 184.108.40.206 0 30 0 1800
3 16 3560 5 220.127.116.11 18.104.22.168 20 20 1200 1200
3 17 3560 5 0.0.0.0 22.214.171.124 0 11 0 660
. . . . . . . . .
3 476 26162 7 0.129.113.0 0.128.37.0 0 1 0 82
3 477 27628 7 0.128.41.0 0.128.46.0 1 1 543 543
3 478 27732 7 0.128.211.0 0.128.46.0 1 1 543 543
3 479 31048 7 0.128.47.0 126.96.36.199 1 1 60 60
3 480 32717 2 188.8.131.52 184.108.40.206 0 4 0 240
3 481 32717 2 220.127.116.11 18.104.22.168 0 232 0 16240
#Time: 12:41:25 Wed 1 Feb 95 22.214.171.124 Flows from 33419 to 63384
3 6 3560 2 126.96.36.199 188.8.131.52 51 180 3079 138195
3 7 3560 2 184.108.40.206 220.127.116.11 21842 18428 2467693 1356570
3 8 3560 7 0.0.255.0 18.104.22.168 0 30 0 2282
3 9 3560 2 22.214.171.124 126.96.36.199 24980 1 5051834 60
3 10 3560 6 188.8.131.52 184.108.40.206 20087 1 8800070 66
3 11 3560 7 0.0.255.0 0.128.113.0 0 164 0 32608
3 12 3560 5 220.127.116.11 18.104.22.168 41 41 2460 2460
3 14 3560 5 22.214.171.124 126.96.36.199 82 82 4920 4920
3 15 3560 5 0.0.0.0 188.8.131.52 0 60 0 3600
. . . . . . . . .

4.2 Flow data file features

Several features of NeMaC's flow data files (as indicated above) are worthy of note:
- Collection times overlap slightly between samples. This allows for flows which were created after the collection started, and makes sure that flows are not missed from a collection.
- The rule set may change during a run. The above shows flows from rule set 1 - the default set - in the first collection, followed by the first flows created by rule set 3 (which has just been downloaded by NeMaC).
- FlowIndexes may be reused by the meter once their flows have been recovered by the garbage collector. The combination of FlowRuleSet, FlowIndex and StartTime is needed to identify a flow uniquely.
- Packet and Byte counters are 32-bit unsigned integers, and are never reset by the meter. Computing the counts occurring within a collection interval requires taking the difference between the collected count and its value when the flow was last collected. Note that counter wrap-around can be allowed for by simply performing an unsigned subtraction and ignoring any carry.
- In the sample flow data file above I have used double spaces as separators between the flow identifiers, peer addresses, pdu counts and packet counts.
- The format of addresses in the flow data file depends on the type of address.
NeMaC always displays Adjacent addresses as six hex bytes separated by hyphens, and Transport addresses as (16-bit) integers. The format of a Peer address depends on its PeerType, e.g. dotted decimal for IP. To facilitate this NeMaC needs to know the PeerType for each flow; the user must request NeMaC to collect it.

4.3 Terminating and restarting meter reading

When NeMaC first starts collecting from a meter, it reads the flow data for all active flows. This provides a starting point for analysis applications to compute the counts between successive collections. From time to time the user needs to terminate a flow data file and begin a new one. For example, a user might need to generate a separate file for each day of metering. NeMaC provides for this by closing the file after each collection, then opening it and appending the data from the next collection. To terminate a file the user simply renames it. The Unix system will effect the name change either immediately (if the file was closed) or as soon as the current collection is complete (and the file is closed). When NeMaC begins its next collection it observes that the file has disappeared, so it creates a new one and writes the # header records before writing the collected data.

There is one aspect of the above which requires some care on the user's part. The last data set in a file is not duplicated as the first data set of the next file. In other words, analysis applications must either look ahead at the first data set of the next file, or begin by reading the last data set of the previous file. If they fail to do this they will lose one collection's worth of flow data at each change of file.

5 Analysis applications

Most analysis applications will be unique, taking data produced by a locally-developed rule set and producing reports to satisfy specific local requirements. The NeTraMet distribution files include the following applications, which are of general use:
- fd_filter computes data rates, i.e. the differences between successive data sets in a flow data file. It also allows the user to assign a 'tag' number to each flow; these are 'computed' attributes similar to FlowClass and FlowKind - the only difference is that they are computed from the collected data sets.
- fd_extract takes 'tagged' files from fd_filter and produces simple 'column list' files for use by other programs. One common use for fd_extract is to produce time-series data files which can be plotted by utilities like GNUPlot.
- nm_rc is a 'remote console' for a NeTraMet meter. It is a slightly simplified version of NeMaC combined with fd_filter. It can be used to monitor any meter, and will display (as lines of text characters) information about the n busiest flows observed during each collection interval.
- nifty is a traffic flow analyser, which (like nm_rc) displays data from a NeTraMet meter. nifty is an X/Motif application, which produces displays like 'Packet rate (pps) vs Flow lifetime (minutes),' so as to highlight those flows which are 'interesting.'
These applications are useful in themselves, and they provide a good starting point for users who wish to write their own analysis applications.

6 Using NeTraMet in a measurement system

This section gives a brief summary of the steps involved in setting up a traffic measurement system using NeTraMet. These are:
- Decide what is to be measured. One good way to approach this is to specify exactly which flows are to be measured, and what reports will be required.
Specifying the flows should make it obvious where meters will have to be placed so that the flows can be observed, whether PCs will be adequate for the task, etc.
- Install meters. As well as actually placing the meter hosts this includes making sure that they are configured correctly, with appropriate IP addresses, SNMP community strings, etc.
- Develop the rule set (and a standby rule set). The degree of difficulty here depends on how much is known in advance about the traffic. One possible approach is to start with the meter default rule set and measure how much traffic there is for each PeerType. (This is a good way to verify that NeTraMet and NeMaC are working properly). You can now add rules so as to increase the granularity of the flows; this will of course increase the number of flows to be collected, and force the meter's garbage collector to work harder. Another approach is to try a rule set with very fine granularity (i.e. one which Pushes all the address attributes), then observe how many flows are collected every few minutes.
- Develop a strategy for controlling the meter reader. This means setting the meter's maximum number of flows, the collection interval, how breaks between flow data files will be handled, how often NeMaC should check that the meter is running, etc.
- Develop application(s) to process the collected flow data and produce the required files and reports.
- Test run. Monitor the system, then refine the rule sets and meter reading strategy until the overall system performance is satisfactory.
This process can take quite a long time, but the overall result is well worth the effort.

6.1 Examples of NeTraMet in production use

At the University of Auckland we run two sets of meters. One of these measures the traffic entering and leaving our University network, and generates usage reports for all our Internet users. This has been in production since early 1994. The other set consists of meters which are distributed at Universities throughout New Zealand. They provide continuous traffic flow measurements at five-minute intervals for all the links making up the Universities' network (Kawaihiko); this system has been in production since January 1996, and has already proved very useful in planning the network's development.

The Kawaihiko Network provides IP connectivity for the New Zealand Universities. They are linked via a Frame Relay cloud, using a partial mesh of permanent virtual circuits. There is a NeTraMet meter at each site, metering inward and outward traffic. All the meters are managed from Auckland, and they all run copies of the same rule set. The rule set has about 650 rules, most of which are in a single subroutine which classifies PeerAddresses into three categories - 'Kawaihiko network,' 'other New Zealand network' and 'non-New Zealand network.' Inside New Zealand IP addresses lie within six CIDR blocks, and there are about four hundred older networks which have addresses outside those blocks. The rules are arranged in groups by subnet size, i.e. all the /24 networks are tested first, then the /23 networks, etc., finishing with the /16 networks. This means that although there are about 600 networks, any PeerAddress can be classified with only nine tests. The Kawaihiko rule set classifies flows, using computed attributes to indicate the network 'kind' (Kawaihiko / New Zealand / international) for each flow's SourcePeerAddress and DestPeerAddress, and to indicate whether the flow is a 'network news' flow or not.
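Producing traffic rates from collections like these means differencing the counters in successive data sets, keyed on FlowRuleSet, FlowIndex and FirstTime, and allowing for 32-bit counter wrap-around as noted in section 4.2. The Python sketch below is not fd_filter; it is only a minimal illustration, and it assumes the ten-column Format shown in the sample file of section 4.1.

# Minimal illustration of differencing successive flow data sets.
# Assumes the Format used in the sample file above:
#   flowruleset flowindex firsttime sourcepeertype sourcepeeraddress
#   destpeeraddress topdus frompdus tooctets fromoctets
MASK32 = 0xFFFFFFFF
previous = {}      # (ruleset, index, firsttime) -> last collected counters

def delta(new, old):
    """32-bit unsigned subtraction, ignoring any carry (handles wrap-around)."""
    return (new - old) & MASK32

def process(path):
    with open(path) as f:
        for line in f:
            if line.startswith('#'):          # information record
                continue
            fields = line.split()
            key = tuple(fields[0:3])          # FlowRuleSet, FlowIndex, FirstTime
            counts = [int(x) for x in fields[6:10]]
            if key in previous:
                rates = [delta(n, o) for n, o in zip(counts, previous[key])]
                print(key, rates)             # counts for this collection interval
            previous[key] = counts

# process('flows.dat')   # hypothetical flow data file name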
Flow data is collected from all of the meters every five minutes, and is used to produce weekly reports, as follows:
- Traffic Plots. Plots of the 5-minute traffic rates for each site, showing international traffic in and out, news traffic in and out, and total traffic in and out of the site.
- Traffic Matrices. Two of these are produced, one for news traffic, the other for total traffic. They show the traffic rates from every site (including 'other New Zealand' and 'international') to every other site. The mean, third quartile and maximum are printed for every cell in the matrices.

7 Acknowledgments

This memo documents the implementation work on traffic flow measurement here at the University of Auckland. Many of my University colleagues have contributed significantly to this work, especially Russell Fulton (who developed the rule sets, Perl scripts and Cron jobs which produce our traffic usage reports automatically week after week) and John White (for his patient help in documenting the project).

8 References

Brownlee, N., Mills, C., and G. Ruth, "Traffic Flow Measurement: Architecture", RFC 2063, The University of Auckland, Bolt Beranek and Newman Inc., GTE Laboratories, Inc, January 1997.
Brownlee, N., "Traffic Flow Measurement: Meter MIB", RFC 2064, The University of Auckland, January 1997.
Crynwr packet driver distribution site: http://www.crynwr.com/
Case J., McCloghrie K., Rose M., and Waldbusser S., "Structure of Management Information for version 2 of the Simple Network Management Protocol", RFC 1902, SNMP Research Inc., Hughes LAN Systems, Dover Beach Consulting, Carnegie Mellon University, April 1993.
IBM Corporation, "IBM PC Technical Reference Manual," 1984.
Waterloo TCP distribution site: http://mvmpc9.ciw.uni-karlsruhe.de:80/d:/public/tcp_ip/wattcp
CMU SNMP distribution site: ftp://lancaster.andrew.cmu.edu/pub/snmp-dist
libpcap distribution site: ftp://ftp.ee.lbl.gov/libpcap-*.tar.gz

9 Security Considerations

Security issues are not discussed in detail in this document. The meter's management and collection protocols are responsible for providing sufficient data integrity and confidentiality.

10 Author's Address

The University of Auckland
Phone: +64 9 373 7599 x8941
Email: [email protected]
Sophos EDR enabled devices are continually capturing data related to process, file, network and other system activity. When a threat detection occurs, a snapshot file of current activity is created on the disk of the device. This snapshot helps generate the Threat Case in Sophos Central, which attempts to piece together the threat chain of an attack and identify related activities. EDR enabled customers also have the ability to create Forensic Snapshots and perform detailed analysis on demand.

Note: To analyse the snapshot you'll first need to convert it into a usable format using a tool that Sophos provides.

Admins can generate a forensic snapshot from within two areas in the Sophos Central Console or from within Threat Cases.

For Endpoints: From Sophos Central Admin > Endpoint Protection > Computers, select the endpoint that you want to generate a snapshot for. In the Status tab select the link to Create forensic snapshot.

For Servers: From Sophos Central Admin > Server Protection > Servers, select the server that you want to generate a snapshot for. In the Status tab select the link to Create forensic snapshot.

From Sophos Central Admin > Threat Analysis Center > Threat Cases, select a Threat Case associated with the device you want to generate a snapshot for. Once in the Threat Case, at the top of the artifact table, click the link to Create forensic snapshot.

Customer generated forensic snapshots can be located in the %PROGRAMDATA%\Sophos\Endpoint Defense\Data\Forensic Snapshots\ directory.

Snapshots based on detections can be located in the %PROGRAMDATA%\Sophos\Endpoint Defense\Data\Saved Data\ directory.

Note: With tamper protection enabled, admins must be running from an elevated command prompt to get access to saved snapshots.

The SDR Exporter utility is the tool used to convert snapshots on a device into a format where advanced queries can be run. The snapshots can be converted to a SQLite database or a JSON formatted file. The tool is available from the Sophos downloads. There is a 64 bit version and a 32 bit version of the tool, and due to changes in functionality an updated version has been provided.

The minimal usage for the tool is to specify the path and filename of the snapshot to be converted, the path and filename of the output file, and the requested format:

64 bit: SDRExporterx64.exe -i <path to snapshot tgz> -o <path to output file> -f <format to output: sqlite or json>
32 bit: SDRExporterx86.exe -i <path to snapshot tgz> -o <path to output file> -f <format to output: sqlite or json>

Help for the tool can be seen by running SDRExporter.exe -h:
-h [ --help ]
-i [ --input-path ]
-o [ --output-path ]
-f [ --output-format ]
-v [ --output-version ]

Note: This functionality requires Core Agent 2.5.0 and above.

By default, snapshots are saved on the local computer. You can upload snapshots to an AWS S3 bucket instead. This lets you access your snapshots easily in a central location, rather than going to each computer.
This requires you to have an available AWS S3 bucket, and to create a new Policy and IAM Role to allow snapshots to be uploaded to the S3 bucket:
- Create a managed policy.
- Add the AWS account to Sophos Central.

Creating a bucket policy

While it is not a Sophos requirement for the upload of forensic data, we do recommend you create a bucket policy to apply restrictions on a bucket. The following is an example policy to restrict access to the bucket contents:
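The specific policy text is not reproduced here, so the following is only an illustrative sketch rather than the policy Sophos documents: it applies one common restriction (denying requests that are not made over TLS) to a hypothetical bucket using boto3. Adjust the statement to whatever restrictions your organisation actually requires.

# Illustrative only - one possible restrictive bucket policy, applied with
# boto3.  The bucket name is an assumption, and this is not the specific
# policy Sophos recommends for forensic snapshot uploads.
import json
import boto3

BUCKET = "example-forensic-snapshots"   # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }]
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))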
MITRE is an unbiased and respected organization that performs a valuable service to the cybersecurity community. The MITRE ATT&CK evaluation is an industry standard, and the industry can use all the help it can get to identify the tactics and techniques employed by cybercriminals. (See The Cyber Threat Landscape for 2022 Darkens.) MITRE helps unite efforts by governmental organizations, academics, and vendors to develop strong defense mechanisms. Even so, should cybersecurity leaders take the results provided in the recent MITRE ATT&CK Engenuity tests as gospel? My view is that while the tests have merit, they only offer part of the picture. Caution is warranted when evaluating each vendor’s interpretation of the results. Organizations seeking to improve their cybersecurity posture may well want to review the raw results, but using a vendor’s analysis as the sole basis for making a security solution purchase is likely unwise. There’s one overriding reason for this, which I’ll get to. But let’s start by examining the raw results based on what MITRE tested, which vendors participated, and how they fared. Details of the MITRE ATT&CK Engenuity Evaluation The MITRE ATT&CK Engenuity tests for the Wizard Spider and Sandworm Edition evaluated the detection and prevention capabilities
An important method used to speed up forensic file-system analysis is white-listing of files: Well-known files are detected using signatures (message digests) or similar methods, and omitted from further analysis initially, in order to better focus the initial analysis on files likely to be more important. Typical examples of such well-known files include files used by operating systems, popular applications, and software libraries. This paper presents methods for improving the effectiveness and efficiency of such signature-based white-listing during file-system forensics. One concern for effectiveness is the resilience of the white-listing method to an adversary who has complete knowledge of the method and who may make small, inconsequential changes to a large number of well-known files on a target file-system in order to overload the analysis and thereby practically defeat it. Another concern is the ability to detect near-matches in addition to exact matches. Efficiency refers to primarily the rate at which a target file system may be processed during analysis; preparation-time, or indexing, efficiency is a lesser concern as that computation may be performed during non-critical times. Our work builds on techniques such as locality-sensitive hashing to yield an effective filter for further analysis tools.
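As a rough illustration of the plain signature-based white-listing the paper builds on (not the authors' adversary-resilient scheme itself), the Python sketch below hashes files on a target tree and filters out exact matches against a known-good digest set. Near-match detection, as discussed above, would require locality-sensitive or similarity hashing rather than an exact cryptographic digest.

# Plain signature-based white-listing: exact-match filtering only (illustration).
import hashlib
import os

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def filter_known(root, known_digests):
    """Yield files under root whose digest is NOT in the white-list."""
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                if sha256(path) not in known_digests:
                    yield path
            except OSError:
                pass    # unreadable file: leave it for manual analysis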
Infoblox has launched a hybrid security solution that uses DNS to detect and counter threats. The solution, BloxOne Threat Defense, comes with a scalable, hybrid architecture. Security solutions need to secure existing networks, as well as digital transformation technology such as the cloud, IoT and SD-WAN. According to Infoblox, DNS provides the ideal basis for security, because it is present in every network, is necessary for connectivity and can scale with the size of the Internet.

BloxOne Threat Defense is intended to keep corporate networks secure at all times, and it does so using a company's existing infrastructure. It makes it possible to monitor DNS traffic in one central location, whether users are on-site or remote, with real-time threat detection and rapid response. The solution uses threat intelligence and analytics based on machine learning, and detects ransomware, phishing, malware, exploit kits, fast-flux attacks, and more. The hybrid approach also enables organizations to use the cloud to detect more threats, while providing them with more insight and full integration with the local ecosystem.

BloxOne Threat Defense is part of Infoblox's ActiveTrust Suite. According to Infoblox, it helps customers reduce the overall cost of their threat protection by taking over work from static perimeter security such as next-gen firewalls, IPS and web proxies: the amount of unsafe traffic reaching those devices is reduced by using existing DNS servers as a first line of defence. Infoblox also claims the solution can cut incident response time by two-thirds, because reactions to abnormal behaviour can be automated, and that it blocks cyber threats while providing the right data to investigate the ecosystem more efficiently. SOAR/SIEM workflows can be strengthened by feeding them DNS, DHCP and IPAM data, so threats can be prioritized by threat level and acted on accordingly. Finally, Infoblox says analysts can become up to three times more productive, as automated threat triage, related threat insights, and location and cybercriminal information allow them to make faster and better decisions, while also reducing the number of human errors.
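Infoblox does not publish the internals of its analytics, so the following Python sketch is only a generic illustration of one building block DNS-layer detection commonly uses: flagging query names whose label entropy looks machine-generated (a simple DGA heuristic that a resolver log pipeline could feed into a SIEM). The threshold and length cut-off are arbitrary assumptions.

# Generic DGA-style heuristic for DNS query names (illustration only;
# this is not Infoblox's detection logic).
import math
from collections import Counter

def entropy(s):
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_generated(qname, threshold=3.5):
    label = qname.split('.')[0].lower()
    return len(label) >= 12 and entropy(label) >= threshold

for q in ["www.example.com", "xkqjhr7t2mzp9v.biz"]:
    print(q, looks_generated(q))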
In the previous article, I looked at the requirements and features of notarization, drawing attention to the apps which were notarized under legacy rules and therefore don't meet the same standards. Here I move on to consider one of those requirements in detail: the hardened runtime.

Apple doesn't appear to have fully explained how the runtime environment of notarised apps is 'hardened'. The best description it gives to developers is that it "protects the runtime integrity of your software by preventing certain classes of exploits, like code injection, dynamically linked library (DLL) hijacking, and process memory space tampering." Those types of exploit could be used by an attacker trying to use third-party software for malicious purposes, or as techniques within their own malicious code, of course.

Among the behaviours which are prohibited in a hardened environment are:
- creating writeable and executable memory, as used in just-in-time (JIT) compilation;
- code injection using DYLD environment variables, which seems unlikely to be anything other than malicious;
- loading plug-ins and frameworks signed by other developers, a common practice in apps which support third-party executable extensions;
- modifying executable code in memory;
- attachment to other processes, or getting task ports, which is typically required by a debugger.

So the perfect app only runs its own signed code, which is never changed in memory, and all that code has been checked for malicious software. Nothing else should be able to alter that model behaviour. Many legitimate apps can't work within all of those restrictions, though, so Apple provides entitlements which allow a hardened app to opt out of individual protections. These entitlements are:
- com.apple.security.cs.allow-jit, which allows JIT code;
- com.apple.security.cs.allow-unsigned-executable-memory, which allows unsigned executable memory;
- com.apple.security.cs.allow-dyld-environment-variables, which allows DYLD environment variables;
- com.apple.security.cs.disable-library-validation, which disables library validation;
- com.apple.security.cs.disable-executable-page-protection, which disables executable memory protection;
- com.apple.security.cs.debugger, which declares the app to be a debugging tool.

There's another important entitlement in the context of notarized apps: you should never come across com.apple.security.get-task-allow, which enables that app to be run in Xcode's debug environment. You should also be aware that the entitlement indicating that an app runs in a sandbox is com.apple.security.app-sandbox. Apps which are both notarized and sandboxed are becoming increasingly common, as they can be distributed through the App Store and in direct sales.

Apps which use the full hardened environment have none of those entitlements. All hardened apps, even those which claim all six opt-outs, show a CodeDirectory flag in their signature of 0x10000(runtime), which is the mark of the hardened app. When hardening is disabled, the flags given for the CodeDirectory are often 0x0(none), but can always include others such as 'kill'.

Currently, the only method provided by macOS to discover whether an app uses the hardened runtime, and which entitlements it takes, is the codesign command tool, as:

codesign --display --entitlements :- appPath

where appPath gives the app's full pathname. Thus there is no convenient method to discover what opt-outs a notarized app uses. Some third-party tools do list them: my own Taccy does, together with a detailed account of the other half of the hardened environment, its privacy settings. I'll look at those in detail in the next article.
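As a rough convenience, the manual codesign checks described above can be scripted. The Python sketch below simply wraps the same codesign invocations (it is not Taccy, and the string matching is deliberately naive); both output streams are read because codesign writes some of its output to stderr.

# Naive wrapper around codesign: report hardened-runtime status and any
# opt-out entitlements claimed by an app (illustration only).
import subprocess
import sys

OPT_OUTS = [
    "com.apple.security.cs.allow-jit",
    "com.apple.security.cs.allow-unsigned-executable-memory",
    "com.apple.security.cs.allow-dyld-environment-variables",
    "com.apple.security.cs.disable-library-validation",
    "com.apple.security.cs.disable-executable-page-protection",
    "com.apple.security.cs.debugger",
]

def check(app_path):
    info = subprocess.run(["codesign", "--display", "--verbose", app_path],
                          capture_output=True, text=True)
    ents = subprocess.run(["codesign", "--display", "--entitlements", ":-", app_path],
                          capture_output=True, text=True)
    signature_info = info.stdout + info.stderr
    entitlements = ents.stdout + ents.stderr
    hardened = "(runtime)" in signature_info   # CodeDirectory flags include 0x10000(runtime)
    claimed = [e for e in OPT_OUTS if e in entitlements]
    print("hardened runtime:", hardened)
    print("opt-outs claimed:", claimed or "none")

if __name__ == "__main__":
    check(sys.argv[1])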
- Abstract: In this paper, we conduct an intriguing experimental study about the physical adversarial attack on object detectors in the wild. In particular, we learn a camouflage pattern to hide vehicles from being detected by state-of-the-art convolutional neural network based detectors. Our approach alternates between two threads. In the first, we train a neural approximation function to imitate how a simulator applies a camouflage to vehicles and how a vehicle detector performs given images of the camouflaged vehicles. In the second, we minimize the approximated detection score by searching for the optimal camouflage. Experiments show that the learned camouflage can not only hide a vehicle from the image-based detectors under many test cases but also generalizes to different environments, vehicles, and object detectors. - Keywords: Adversarial Attack, Object Detection, Synthetic Simulation - TL;DR: We propose a method to learn physical vehicle camouflage to adversarially attack object detectors in the wild. We find our camouflage effective and transferable.
Palo Alto Networks maps third-party services and data centers to allow flexibility when creating network policy rules to account for uniqueness across sites. For example, you may create a single network policy that directs all HTTP and SSL internet bound traffic through the primary cloud security service in the region if available. If the primary cloud service is not available, you may leverage the backup cloud security service in the region. You may have different primary and backup cloud security service endpoints based on your geographic location. The intent and the policy rules remains the same regardless of the site location. The illustration below displays how endpoints, added to a group, are associated with a domain. The domains are bound to a site, thus uniquely mapping third-party services or data centers to each site. You can map a group, with different endpoints, to one or more domains and map a domain to one or more sites. A site can use only the endpoints configured in a group within a domain that is assigned to the site. The same group, however, can be in multiple domains with different service endpoints, which allows you to use the same policy across different sites utilizing
You can protect cloud data and resources with the help of Cloud Security Posture Management (CSPM). To provide ongoing visibility, you may incorporate CSPM into your development phase. For DevOps processes, which rely mainly on automation, CSPM is very advantageous. With CSPM, you can create cloud auditing processes and benchmarks, automate misconfiguration repair, and pinpoint hazards throughout your cloud architecture.

Cloud Security Posture Management (CSPM): What Is It?

Cloud Security Posture Management (CSPM) is a collection of procedures and techniques used to ensure the security of your cloud resources and data. It is a development of Cloud Infrastructure Security Posture Assessment (CISPA) that adds many layers of automation and an emphasis on continuous monitoring. CSPM can be applied to DevOps integrations, incident response, continuous monitoring, compliance evaluations, and risk identification and visualization. Ideally, CSPM should support governance, accountability, and security while assisting you in continually managing your cloud-based risk. Container-based and multi-cloud setups can also benefit significantly from it.

How Come CSPM Is Important?

According to a Gartner study, CSPM solutions may cut the number of cloud security incidents involving incorrect setups by as much as 80%. You can monitor changing cloud environments using CSPM solutions and spot inconsistencies between your security posture and rules. By using these technologies, you can lessen the likelihood that your systems will be compromised and the amount of damage that attackers will be able to do if they are successful. You may improve the security of your apps and deployments by integrating CSPM technologies into your development processes.

The following are the most frequent advantages that CSPM brings to organizations:
- Regular security testing for cloud settings
- Automatic correction of misconfigurations
- Benchmark and compliance assessments to confirm best practices
- Constant monitoring of all cloud environments

The following are just a few of the biggest dangers to your environments that CSPM solutions may assist you in identifying:
- Data or network encryption that is insufficient or nonexistent
- Incorrect encryption key handling
- Inadequate authentication procedures
- Inadequate or absent network access controls
- Storage access that is open to the public
- Absence of event tracking or logging

Why CSPMs Should Be Used

Any firm using the cloud should consider CSPM solutions. However, certain organizations can benefit more than others. These include:

Organizations with heavy or critical workloads are a target for attackers since they hold more data and run more vital processes. Furthermore, because more people and data depend on you, the fines or lost income in the event of an incident can be substantial. With the aid of CSPM, you can ensure that all company resources remain secure and concentrate additional security efforts on crucial tasks.

Multiple cloud service accounts inside an organization increase the risk of misconfigurations and a lack of consistency. With the aid of CSPM, you can stop attackers from leveraging these openings to gain access to one group of resources and then move laterally, which may give them access to your whole business.
Organizations operating in highly regulated sectors may find it challenging to maintain compliance in the cloud due to regional data distribution, accessibility from anywhere in the world, and little control over the infrastructure. You may audit your resources with the aid of CSPM to ensure they are compliant, and demonstrate that compliance.

Best Practices For CSPM

There are a few recommended practices you ought to include while adopting CSPM. These procedures can assist you in prioritizing your work, maximizing the benefits of automation, and ensuring policy compliance.

Automate compliance benchmarking
Use CSPM solutions and techniques that provide automated benchmarking and monitoring, so that new components can be benchmarked against your baselines as soon as they are built.

Set your priorities based on the level of risk
It might be tempting to solve problems as you come across them while dealing with security concerns and vulnerabilities. However, the sequence in which you find problems frequently doesn't correspond to the level of risk those issues pose. Prioritize by risk level rather than focusing on minor concerns while more significant problems go unaddressed.

Implement security checks in the development pipelines
Workflows should include security screening if you use DevOps pipelines to create software. If you're not careful, the pace of environment creation and product delivery in these settings can quickly overwhelm you with risks. With the aid of CSPM you can discover hazards, obtain continuous insight into your cloud estate, and automate the correction of misconfigurations. Critical cloud workloads can be safeguarded with CSPM across various platforms and cloud providers, as the small example below illustrates.
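As one concrete instance of the kind of automated misconfiguration check a CSPM process runs continuously (publicly readable storage, in this case), the Python sketch below walks S3 buckets with boto3 and flags ACL grants to everyone. It is only an illustration; real CSPM products cover far more resource types, benchmarks and cloud providers.

# Minimal CSPM-style check: flag S3 buckets whose ACL grants access to
# everyone (illustration only).
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [g for g in acl["Grants"]
                     if g["Grantee"].get("URI") in PUBLIC_GRANTEES]
    if public_grants:
        print(f"{name}: publicly accessible ACL grant found")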
Re: Expert Rules..
When creating an expert PROGRAM rule the last rule should be one to block everything, because program rules are ALL applied to the traffic, so if the last rule doesn't block everything, then you are basically opening up the firewall for that program. Now this doesn't apply for ZONE expert rules. They are applied until there is a single match, then the rest are ignored.
Forgot to mention, Team Z is a group of users that help out ZoneAlarm users in forums around the net. For this we get a few perks. http://www.zonelabs.com/store/content/company/teamz.jsp
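To make the difference concrete, here is a small Python illustration (a rough model of the behaviour described above, not ZoneAlarm code): zone expert rules stop at the first match, while program expert rules are all applied, which is why the last program rule should block everything.

# Rough model of the two rule-evaluation styles described in the post.
def zone_rules(rules, packet):
    """Zone expert rules: applied until there is a single match; rest ignored."""
    for matches, verdict in rules:
        if matches(packet):
            return verdict
    return "default"        # no rule matched; the firewall's default policy applies

def program_rules(rules, packet):
    """Program expert rules: all rules are applied to the traffic, so unless
    the last rule blocks everything, unmatched traffic effectively gets through."""
    for matches, verdict in rules:
        if matches(packet) and verdict == "block":
            return "block"
    return "allow"          # nothing blocked it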
This is a story about the SoakSoak malware campaign that proved that you can't underestimate the impact of security issues in popular premium software.

These days, the majority of popular content management systems are 100% free: WordPress, Magento, Joomla, Drupal, etc. Moreover, most CMS extensions are also free. In fact, modern webmasters can build any type of site entirely through free software. Most popular software has thousands — or even millions — of installations. Their source code is open, and it's easy for hackers to search for security bugs. It's clear that when found, these vulnerabilities have the ability to impact a significant number of websites. On the other hand, premium software is also readily available and may be popular in certain niches where people are ready to pay money for important features. However, the source code for paid software is not available without buying it, which adds a barrier against bad actors seeking vulnerabilities to exploit. You can hardly expect premium software to be as prevalent as free, so the impact of exploited vulnerabilities should be lower. At the end of 2014, the WordPress community learned that this isn't always true — the hard way.

Five years ago, on the weekend of December 13-14, 2014, we witnessed the first strike of the SoakSoak tsunami. Literally overnight, tens of thousands of WordPress sites got infected with malware that loaded a script from "hxxp://soaksoak[.]ru/xteas/code" (which gave its name, SoakSoak, to the whole wave of related infections). On Sunday, December 14, 2014, Google had already blacklisted 11,000+ domains for loading SoakSoak (we estimated the real number of affected websites to be much higher). On Monday, December 15, 2014, we found a new variation of the malware that loaded a Flash object containing an invisible iframe from "hxxp://milaprostaya[.]ru/images/". A week later, on December 21, 2014, there was a new massive wave of infections. This time, malware loaded a Flash object (wp-includes/js/swfobjct.swf) and a script from "hxxp://ads .akeemdom .com/db26". Several other less prominent malware campaigns using the same infection mechanism were also active at this time.

Record Breaking Month for Sucuri

Beginning December 14, 2014, Sucuri started to receive a lot of malware removal requests from affected websites. Typically a slow month, that December broke historical records for the number of websites we cleaned. The number of malware detections by our SiteCheck scanner doubled during the second part of December. Everyone who worked at Sucuri at that time, regardless of their title and position, worked in the queue cleaning websites. To cope with the increased volume of tickets, we had to improve our cleanup tools and overall cleanup process, which ultimately helped us scale to the number of clients we have now.

Vulnerable Slider Revolution Plugin

Shortly after we started receiving a massive influx of cleanup requests, we identified the common component in all these infections, along with the vulnerabilities that allowed hackers to compromise so many sites. Hackers actively exploited two vulnerabilities in a popular premium plugin called Slider Revolution (or RevSlider – the name of the directory it used).

File download vulnerability:
- Published: September 1, 2014
- Affected versions: < 4.2

File upload vulnerability (the one exploited by SoakSoak):
- Discovered: October 15, 2014
- Published: November 26, 2014
- Affected versions: <= 3.0.95 (RevSlider) / <= 1.7.1 (ShowBiz Pro)

Both security holes had been patched long before the massive attacks began.
- The file download vulnerability was fixed in February 2014.
- The file upload security hole (exploited by SoakSoak) was fixed since 2013. While we’d seen attacks in the wild for a few months before SoakSoak, none of them were as massive. Why Was This Attack so Massive? You might be wondering why long patched vulnerabilities in a premium plugin made it possible to infect so many sites in a very short time. If we analyze all of the factors that contributed to the success of the SoakSoak attack, the reason how this massive infection occurred becomes clear. WordPress Market Share First of all, the market share of WordPress is huge. At the time of the attack, more than 60 million websites were estimated to be powered by WordPress. Even 1% would be more than half a million, which is a very impressive number. Most Popular Slider Plugin Slider Revolution is a premium plugin from ThemePunch that provides a highly customizable all-purpose slide displaying solution. At that time, it was the most popular slider plugin on Envato with over 50,000 sales. RevSlider Bundled with Thousands of Themes ThemePunch had a special ThemeForest license for developers who sell their themes on ThemeForest.net. At the end of 2014, 1,200 third-party themes contained RevSlider/ShowBiz plugins. During this time, Revslider was a part of the #1 top selling WordPress theme Avada. At the end of 2014, this theme alone reported 100,000+ users. Theme users might not even realize that the third-party RevSlider plugin was installed along with these themes. Of course, pirates also leveraged the popularity of these premium plugins and themes. Many webmasters ignored the moral- and security-related questions of using pirated software and installed nulled RevSlider on their sites. All in all, we estimate there were more than 1 million sites using the Slider Revolution plugin at the time of SoakSoak infection. One Can’t Simply Update Premium Software Now that we have an estimate of the plugin usage, let’s think about why hackers managed to massively exploit vulnerabilities that had been patched months before the attack. Updating is typically not straightforward for premium software. Patched versions are not available unless you are a paid user. Sometimes, only minor updates are free and you have to pay for major upgrades. In the case of Slider Revolution, upgrades from versions older than 4.1.5 couldn’t be called completely effortless, so some webmasters decided to stay with their current version. No Reliable Updates for Bundled Plugins At the time, third-party themes that used RevSlider didn’t include auto-updating functionality for bundled plugins. This meant that RevSlider could only be updated if the theme developers decided to include a new version of the plugin in their theme updates (which, again, could be paid for, unavailable, or just ignored). Ignoring new version releases for third-party software is quite common, even if they are free and easy to install. Lack of time and fear that the update will break something are the most common reasons. ThemePunch did not emphasize security fixes in new versions, so there had been no special incentive to upgrade the plugins (both for webmasters and theme developers). The combination of all the above-mentioned factors resulted in hundreds of thousands of websites using vulnerable versions of RevSlider in December 2014. Many of them used more than one year old versions <= 3.0.95. After the publication of RevSlider exploits, it was just a matter of time for hackers to come up with automated solutions to find and infect compromised sites. 
The SoakSoak campaign did that very efficiently — its scope exceeded most other malware infections that Sucuri has dealt with in the past 10 years. Here are the main takeaways from the SoakSoak infections:
- Premium and closed-source software are not immune to hacker attacks.
- Popular premium themes and plugins have massive user bases, and hackers will always try to find a way to exploit them.
- Timely updating for paid software is as important as updating the CMS itself, along with any free components. If you use a theme with bundled premium plugins, you rely on the theme developers for the plugin updates.
- There are many reasons why you should not use pirated software. Not receiving security updates is one of them.
To minimize risks associated with untimely software updates, websites should consider using web application firewalls that virtually patch most known — and not-yet-known — security holes.
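One practical follow-up, not covered in the original post, is to check web server access logs for the widely reported RevSlider exploitation pattern. The Python sketch below greps a log for the admin-ajax requests associated with the file-download attacks; treat the patterns as indicative, not an exhaustive set of indicators.

# Check access logs for requests matching widely reported RevSlider
# exploitation patterns (indicative only, not a complete IoC list).
import re
import sys

PATTERNS = [
    re.compile(r"admin-ajax\.php\?action=revslider_show_image&img=\.\./", re.I),
    re.compile(r"revslider.*update_plugin", re.I),
]

def scan(log_path):
    with open(log_path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{log_path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)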
With the rapid development and popularization of Internet of Things (IoT) devices, an increasing number of cyber-attacks are targeting such devices. It was said that most of the attacks in IoT environments are botnet-based attacks. Many security weaknesses still exist on the IoT devices because most of them have not enough memory and computational resource for robust security mechanisms. Moreover, many existing rule-based detection systems can be circumvented by attackers. In this study, we proposed a machine learning (ML)-based botnet attack detection framework with sequential detection architecture. An efficient feature selection approach is adopted to implement a lightweight detection system with a high performance. The overall detection performance achieves around 99% for the botnet attack detection using three different ML algorithms, including artificial neural network (ANN), J48 decision tree, and Naïve Bayes. The experiment result indicates that the proposed architecture can effectively detect botnet-based attacks, and also can be extended with corresponding sub-engines for new kinds of attacks.
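The paper's feature set and data are not reproduced here; as a generic illustration of the approach it describes (feature selection feeding a lightweight classifier), the Python sketch below uses scikit-learn with Naïve Bayes on placeholder data. The feature matrix, the choice of mutual information for selection, and the value of k are all assumptions.

# Generic sketch of feature selection + a lightweight classifier for
# botnet-traffic detection (illustration; not the paper's pipeline or data).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

# X: per-flow feature matrix, y: 0 = benign, 1 = botnet (placeholder data)
rng = np.random.default_rng(0)
X = rng.random((1000, 40))
y = rng.integers(0, 2, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
clf = GaussianNB().fit(selector.transform(X_tr), y_tr)
print(classification_report(y_te, clf.predict(selector.transform(X_te))))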
Binwalk is a tool used mainly for searching for embedded files and executable code within another data file.
$ sudo apt install binwalk
$ binwalk -e <file-name>
In the binwalk output (shown as a screenshot in the original write-up), the sample file deeper.jpg is reported to contain additional image data embedded within it, starting at offset 202. To extract it manually we can use the carving tool dd, which can carve out data from a specific offset passed as an argument, along with the file that needs to be read. Give the following command:
$ dd if=deeper.jpg of=image1.jpg bs=1 skip=202
For more information about the tool:
$ man binwalk
In phase one, we assemble a list of potential threats to assess what mitigating measures we should be prepared to take. An example of this could be to check whether a certain component is sensitive to DDoS attacks, and then to identify the countermeasures that are planned in the development work. In the next phase, we run a static source code analysis of everything that is in development, so that our developers can quickly receive feedback on whether they have taken in a library, or produced code, that has known problems. Throughout the development cycle, we also receive suggestions on how code, or implementation processes, should be adapted to be as secure as possible. When the component finally goes into operation, we have a basic security protection protocol that ensures that we are GDPR-compliant, and that the right people access the correct data at any given time. We also ensure that good encryption is in place, and that we have the perimeter protection needed in order to be able to offer genuinely secure maintenance operations.
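As one concrete way to give developers the kind of quick feedback described above, a pipeline step can run a static analyser and fail the build on findings. The Python sketch below wraps Bandit (a Python SAST tool); the tool choice and the hard failure threshold are illustrative assumptions, not a statement about the toolchain actually used in the process described.

# Illustrative CI gate: run a static analyser (Bandit) over the source tree
# and fail the pipeline if it reports findings.  Tool choice is an example.
import subprocess
import sys

def static_analysis_gate(src_dir="."):
    result = subprocess.run(
        ["bandit", "-r", src_dir, "-q"],     # -r: recursive scan, -q: quiet output
        capture_output=True, text=True)
    if result.returncode != 0:               # Bandit exits non-zero when issues are found
        print(result.stdout)
        sys.exit("static analysis gate failed - fix findings before merging")
    print("static analysis gate passed")

if __name__ == "__main__":
    static_analysis_gate()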
Consider the terms “cyber attacks”1 https://csrc.nist.gov/glossary/term/Cyber_Attack and” information and influence activities2Manheim, J., 2011. Strategy in information and influence campaigns. New York: Routledge.” These two terms were relatively infrequently used before the computer malware Stuxnet and the 2016 US presidential election3 https://www.dni.gov/files/documents/ICA_2017_01.pdf , respectively. Yet these events and the terms characterizing the emergence of new threats mark a threshold where a traditional government issue transcended into the commercial arena, and when nation-state actor capabilities became commercialized and publicly available. Each of these events, in its own way, forced commercial companies to change their security methodologies and postures to mitigate risk and control potential blowback stemming from these types of incidents. In the current post-Stuxnet era, there exists a much-expanded digital infrastructure, tremendous diversity in the types of threat actors and their motivations, and exponentially more capabilities that can be leveraged for substantial impact. Similarly, in a post foreign influence environment, information and influence activity is now not only a threat to western political bodies and their ideologies, but also to the commercial domain due to the proliferation of disinformation-as-a-service4 https://www.pwc.com/us/en/tech-effect/cybersecurity/corporate-sector-disinformation.html (DaaS) and related destabilizing offerings5https://www.gartner.com/en/documents/3974933/how-disinformation-as-a-service-affects-you. The cyber ecosystem established as a result of the digital infrastructure built post-Stuxnet was not designed to support the addressal of such malign information and influence activity. Hence, outside of recent advancements in detection, there is no consolidated solution to effectively counter the full scope and sophistication of malicious information and influence activity. Why do these events matter? Simply put, they provide illustrative examples of how new, cross-domain threats result from the emergence of novel cyber activities and the proliferation of related capabilities. It is only natural to wonder what domain might be next. In this post, an argument is made explaining the space sector’s unique vulnerabilities to such cross-domain threats. This post further explores how lessons learned from previous cross-domain catalysts can be applied in the space domain. The equivalent of a Stuxnet or foreign influence-like event in space would make space the third cross-domain issue in recent time to transcend from the government into the commercial arena. And while traditional nation-state actors, capabilities, and intents would again no longer remain under the purview of the government, anticipation of such an event can enable the identification of various commercial applications, as well as produce an unprecedented security posture to prevent foreign adversaries and threat actors from exploiting space as the next domain for malicious activity. Cyberattacks and information and influence activities provide critical insights into how both foreign adversaries and non-state threat actors will likely use space in nefarious ways to advance their agendas. These insights can shed light on how to monitor threat indicators; how to develop cyber and related (physical, etc.) security postures; and how novel assessment methods of key threat events may provide opportunities to mitigate risks while simultaneously advancing space technologies. 
This post views space as an emerging threat domain displaying early vulnerabilities to pernicious cyber activities, as well as a new vehicle to support advancements in a variety of fields. It also analyzes and discerns between foreign adversaries and threat actors. Specifically, foreign adversaries are nation-state actors advancing policy objectives through overt and covert means. Whereas, by comparison, threat actors include domestic entities, shadow proxies, and criminal enterprises engaging in activities against various sectors for financial or reputation gain. Why does Stuxnet matter? Understanding Stuxnet is critical to developing an understanding of how to anticipate, through assessment, the threat surfaces that the space domain introduces, and how to develop proactive strategies to mitigate its vulnerabilities. Stuxnet’s use against industrial infrastructure was the catalyst that both brought cyber to the forefront of the world as an attack mechanism and transformed it from a government priority to a global threat. 6 https://spectrum.ieee.org/the-real-story-of-stuxnet Stuxnet initiated a series of events (expansion of cyber threat landscape, awareness to cybersecurity, etc.) leading to the establishment of digital infrastructure reaching global audiences irrespective of geographic region, an aspect of information security not previously prioritized, and a springboard for today’s technology companies to monopolize digital communication and connection. Over the course of the last 10-15 years since Stuxnet, this digital infrastructure continues to exponentially evolve by increasing in scope, size, and utility. The quantity of commercial applications, companies, and cyber incidents continues to increase, as well as the sophistication and complexity of these activities (E.g. Colonial Pipeline ransomware attacks7 https://www.bloomberg.com/news/articles/2021-06-04/hackers-breached-colonial-pipeline-using-compromised-password , US State Department cyber attack8 https://www.infosecurity-magazine.com/news/us-state-department-cyber-attack/ ). Compounding this is that regulation and security are always second to innovation. In other words, it was not until recently that significant strides in cybersecurity were made from a regulatory and security perspective9https://www.cisa.gov/news/2021/08/05/cisa-launches-new-joint-cyber-defense-collaborative to position companies more effectively and authoritatively against threat actors. These strides help decrease the delta between threat actor impact and having the appropriate tools to defend against such threats. From the types of defensive tools and software to advancements in foreign threat actor analysis, companies can adhere to a much higher standard to protect their business models while leveraging the diverse digital infrastructure. Why does the 2016 U.S. presidential election matter? Like Stuxnet, Russia’s campaign to influence the outcome of the 2016 U.S. presidential election was an incident where a traditionally government-centric topic transcended into the commercial space. The primary difference this time was that the mature digital infrastructure that existed in a post-Stuxnet era was not built to detect, mitigate, anticipate, or respond to malicious information and influence activities. In addition, the delta between incident and capability development was significantly less than post-Stuxnet. 
In this instance, foreign adversaries and threat actors manipulated the digital infrastructure already established to launch successful malicious information and influence activities. The mediums to reach various target audiences already existed and were in place to deliver tailored messaging to change behavior and outcomes.

How do threat actors evolve? Foreign adversaries' and threat actors' capabilities, modus operandi (MO), and methods continually evolve to advance their interests. Traditionally, this is a classic cat-and-mouse game, as nation-state actors engage in espionage-like activities to inform their evolution. Specifically, as nation-state actors conduct covert and clandestine activities, it is always a race to detect and attribute the activity. However, there are certain instances where operations are discovered and tools or capabilities are compromised. Each time a compromise occurs, actors are forced to consider the potential risk of continuing to use compromised capabilities and whether a change in their offensive posture is necessary. To avoid detection, adversaries may improve their tactics, techniques, and procedures (TTPs) or MO. More importantly, nation-state actor tools have become more broadly known and available for commercial use. In each instance where a traditionally prioritized government topic (cyber, influence, etc.) transcends into the commercial space, the timeline of its otherwise natural evolution is compressed. There are countless instances where commercial entities uncover various threat actor tools, techniques, and capabilities. In these instances, and in that exact moment, threat actors lose their competitive advantage to send a phishing email, execute malware or spyware, or penetrate a network (https://us-cert.cisa.gov/ncas/alerts/aa21-116a). This rapid expansion of discovery causes previously proprietary and sophisticated tools to become more commonplace.

How does threat actor evolution transcend into the commercial sector? Foreign adversaries and threat actors must now position themselves with increasingly sophisticated capabilities and further prioritize the use of those capabilities given the higher chance of discovery. What exactly does this mean? It means that as the delta between commercial and government capability continues to decrease, the suite of tools and capabilities of non-government foreign adversaries and threat actors will increase in sophistication, incentivizing foreign government threat actors to innovate and reprioritize their efforts given the noisy digital battlefield. In both Stuxnet and the 2016 U.S. presidential election, threat actor capabilities, TTPs, and MO eventually transcended into the commercial space. This is critical to recognize because each time this type of activity occurs, the commercial world enhances its capabilities and foreign adversaries and threat actors lose a capability. Ultimately, foreign adversaries and threat actors are required to evolve and change their TTPs, MO, and capabilities as commercial entities attempt to predict where threat actor behaviors will trend (https://us-cert.cisa.gov/ncas/alerts/aa21-116a).

Why is cybersecurity specific to space more important than ever? Security is always second to innovation. This dynamic must change in order to proactively protect infrastructure, institutions, and processes across industry and government. This means that companies must prioritize cybersecurity from inception and leverage best practices when building their solutions.
This is especially important because space will be a domain with new types of infrastructure that foreign adversaries and threat actors can manipulate to advance their own agendas. With each commercial iteration of technology improvement, foreign adversaries and threat actors gain new ways to deploy the capabilities in their proverbial toolbox. Foreign adversaries and threat actors continually hunt for pain points to identify and manipulate. This is no different with space. As such, implementing a robust security posture will serve multiple purposes. Firstly, robust security will help ensure that when an element of space infrastructure is compromised, the damage is limited. Secondly, robust security will limit foreign adversaries' ability to utilize space infrastructure for covert and/or clandestine operations. Thirdly, more intentional security protections will help prevent threat actors from profiteering and using space infrastructure for nefarious purposes, including ransomware, spyware, and espionage. We are currently at a critical juncture for maintaining a competitive advantage where, unlike before (e.g., pre-Stuxnet and preceding the 2016 US presidential election), we can leverage historical lessons to implement cybersecurity postures from inception for space-based technologies and thereby prevent nefarious activities.

How can we ensure the proper cybersecurity practices and standards are implemented to support innovation while balancing protection in space? Two constant themes have emerged over the past two decades as government issues transcended into the commercial arena. One, there is a lack of true partnership between industry and government, which leads to breakdowns in communication and a lack of full insight. Two, there is a tremendous body of academic research on cybersecurity practices and standards with solutions that have not yet been implemented. This post identifies three primary ways to ensure the proper cybersecurity practices and standards are implemented to support innovation while balancing protection in space.

• Lead by example. As new technologies are developed and advances in space infrastructure occur, the individuals at the helm need to lead by example. Establishing sound cybersecurity practices from inception and demonstrating a level of responsibility commensurate with the potential impact of these technologies is essential. Time and again, major corporations and companies have led by negative example, with misplaced priorities. Obviously, profits are a significant factor. However, companies now more than ever need to manage risk from both a proactive and a reactive posture. Complex infrastructures, such as that for space, include too many shared dependencies that put security, and therefore profit, at risk for all industry and government entities; as such, a more collaborative, community-based approach is required.

• Anticipate through assessment. Augmented intelligence (https://www.gartner.com/en/information-technology/glossary/augmented-intelligence) is a growing expectation in the AI/ML field. To overcome the challenges posed by increasing amounts of data, the subjectivity and confirmation biases inherent in the human condition, and continually evolving foreign adversaries and threat actors, the domestic posture needs to shift to anticipation through assessment.
Studying foreign adversaries' and threat actors' past tendencies and histories illuminates which indicators to monitor in order to proactively protect critical assets and infrastructure. Space is no different. Viewed as a new type of infrastructure for delivering services, space will inherently present multiple points that threat actors will attempt to exploit.

• Quick to cauterize. The final piece is to accept that an attack or penetration is only a matter of time, and no company is immune. That said, it comes down to how quickly malicious activity can be detected; the quality and confidence of the data used to identify indicators to monitor; the capacity to conduct root cause analysis; and the ability to swiftly cauterize attacks and limit blowback. This is more of a mindset and a realistic expectation to maintain.

What about space ethics? Regulation is always second to innovation, and following regulation is ethics. Ethics specific to space may not be developed in a realistic timeframe unless a significant event occurs. That said, there have been two previous moments in time when government issues transcended into the commercial space overnight, and the lessons learned from them can be used to inform the proper way to secure space infrastructure in a robust manner. There are a few foundational assumptions that the U.S. needs to make to support the development of a system of principles and rules regarding space behaviors. Both threat actors and foreign adversaries abide by their own rules and only play nicely when the outcome benefits their own self-driven interests. These same entities also leverage different types of infrastructure, including space, in illegal ways. These two assumptions will help ensure that those who live within the letter of the law develop a standard set of norms specific to space, not only to operate soundly, but also to ensure a robust security posture exists to protect against malicious intent and activity. Space introduces new types of infrastructure, new vehicles to deliver information, new pathways to technological advancements, and new needs to support innovation. Furthermore, space as a government issue has not yet fully transcended into the commercial arena, meaning a significant catalyst has not yet forced the hand of commercial entities to change their current security postures. As we have seen with Stuxnet and the 2016 US presidential election, it took a significant event for commercial entities to re-evaluate the importance of cyber and of information and influence activity, issues the government prioritizes every day. Space is also one of those priorities. Since space exploration first began, space has been, and will always be, a race to the finish. Who will get to the moon first? Who will get to Mars first? Who will colonize space first? The U.S. is proactively postured to develop and implement innovative techniques based on cybersecurity best practices to protect this new type of infrastructure. Foreign adversaries and threat actors will use space as another means to advance their self-interests. In order to protect national interests, stakeholders will need to prioritize cybersecurity from inception and anticipate through assessment: understanding past practices, monitoring key indicators, and continually maintaining a competitive advantage.

The views expressed in this article are based on the experiences of the individual authors and do not necessarily represent those of the Atlantic Council or the authors' organizational affiliations.
T. Manikandan, S. Shitharth, C. Senthilkumar, C. Sebastinalbina, N. Kamaraj
International Journal of Innovative Research in Science, Engineering and Technology

Mobile ad hoc networks are self-configuring and self-organizing multi-hop wireless networks. A mobile ad hoc network (MANET) is a collection of autonomous mobile users communicating over bandwidth-constrained wireless links. Because the nodes are mobile, the topology of the network changes unpredictably and rapidly over time. A selective black hole attack on a MANET is an attack by a malicious node that forcibly acquires the route from a source to a destination by falsifying the sequence number and hop count of the routing message. A selective black hole is a node that can alternately perform a selective black hole attack or behave as a normal node. In this paper, we propose a method of activating promiscuous mode so that further data packet loss is prevented. Finally, we analyze the performance of the nodes after the inclusion of promiscuous mode.

Keywords: Ad hoc, source routing, malicious node

A. Wireless Network Security

A wireless mobile ad hoc network is a self-configuring network composed of several movable nodes. Ad hoc is a Latin phrase meaning "for this purpose". Each device in a MANET is free to move independently in any direction and will therefore change its links to other devices frequently. These mobile nodes communicate with each other without any infrastructure; furthermore, all of the transmission links are established through the wireless medium. Different protocols are then evaluated based on measures such as the packet drop rate, the overhead introduced by the routing protocol, end-to-end packet delays, network throughput, and so on. MANETs are very popular because their application areas involve network topologies that change frequently. A MANET is more vulnerable than a wired network because of its mobile nodes. Existing wired security solutions cannot be applied directly to MANETs, so new proposals for MANET security are always needed. MANETs also have their own vulnerabilities, such as lack of centralized management, resource availability, scalability, cooperativeness, dynamic topology, limited power supply, and bandwidth constraints. In MANETs there are a number of broadcasting approaches, such as unicasting, multicasting, broadcasting, and geocasting. Security attacks are generally classified into two types: external attacks and internal attacks.

II. RELATED WORKS

Selective black hole attacks always have an impact on routing algorithms, exploiting the sequence number used to select the shortest route in routing protocols such as AODV or DSR. Generally, the effect of a selective black hole node can be reduced to an appropriate extent in the AODV protocol, as described by Can Erkin Acar. The justification is given by the concept of rejecting the first two RREP packets sent to the source node, because the selective black hole node mostly sends its RREP as one of the first two RREPs received by the source node. Hence this is efficient in detecting black hole attacks in the AODV protocol.

Sankarnarayanan proposed another efficient approach based on the AODV protocol: usually a source does not act immediately upon receiving the first RREP. It waits until all the neighboring nodes have sent their RREPs. The source then sends its reply to the node that is at a distance of two from the source node.
He also proposed another method to detect cooperative black hole attacks based on updating a fidelity level.

Initially, all nodes are assigned a fidelity level, and the source sends RREQs to all nodes.

It then selects a node whose fidelity level exceeds the threshold value to pass the packets. An ACK is sent from the destination node, and the source node adds one to the fidelity level; it subtracts one if no ACK is received. That indicates the possible presence of a black hole node and, hence, that data packets may be lost before they reach the destination node. Nakayama et al. proposed a dynamic learning method to detect a selective black hole node. It observes whether the change in a node's characteristics exceeds a threshold within a period of time. If yes, the node is judged to be a selective black hole node; otherwise, the data of the latest observation is added to the dataset for dynamic updating purposes. The characteristics observed in this method include the number of sent RREQs, the number of received RREPs, and the mean destination sequence number of the observed RREQs and RREPs. However, it does not involve a detection mode, such as revising the AODV protocol or deploying IDS nodes; thus, it does not isolate selective black hole nodes. Luo et al. added an authentication mechanism to the AODV routing protocol, combining hash functions, message authentication codes (MAC), and a pseudo-random function (PRF) to prevent black hole attacks.

Djahel et al. proposed a routing algorithm based on OLSR (Optimized Link State Routing) to prevent cooperative selective black hole attacks by adding two control packets, namely 3 hop_ACK and HELLO_rep. Mahmood and Khan also surveyed recent research papers involving selective black hole attacks on MANETs, described seven previous methods, and analyzed their advantages and disadvantages. In this paper, IDS nodes are deployed in MANETs to identify and isolate selective black hole nodes. An IDS node observes every node's number of broadcast RREQs and number of forwarded RREQs in AODV in order to judge whether any malicious nodes are within its transmission range. Once a selective black hole node is identified, the IDS node sends a block message through the MANET to isolate the malicious node.

III. REACTIVE (ON-DEMAND) ROUTING PROTOCOL

Reactive routing is also known as on-demand routing. Unlike proactive routing, reactive routing is started only when nodes wish to transmit data packets. Its strength is that the bandwidth wasted by periodic broadcasts is reduced. Nevertheless, this can also be a fatal weakness when there are malicious nodes in the network environment, and the passive routing method leads to some packet loss. Here we briefly describe two prevalent on-demand routing protocols: the ad hoc on-demand distance vector (AODV) and dynamic source routing (DSR) protocols. AODV is constructed based on DSDV routing. In AODV, each node records only next-hop information in its routing table but maintains it to sustain a routing path from the source to the destination node. If the destination node cannot be reached from the source node, the route discovery process is executed immediately. In the route discovery phase, the source node first broadcasts the route request (RREQ) packet.
All intermediate nodes then receive the RREQ packets, but only some of them send a route reply (RREP) packet to the source node, namely those whose routing tables contain information about the destination node. On the other hand, the route maintenance process is started when the network topology has changed or a connection has failed.

The source node is first informed by a route error (RERR) packet. It then uses the existing routing information to decide on a new routing path or restarts the route discovery process to update the information in its routing table. The design of DSR is based on source routing. Source routing means that each data packet carries the routing path from source to destination in its header. Unlike AODV, which records only next-hop information in the routing table, the mobile nodes in DSR maintain a route cache from source to destination node. In terms of the above discussion, the routing path can be determined by the source node because the routing information is recorded in the route cache at each node. However, the performance of DSR decreases as network mobility increases, with a lower packet delivery ratio under higher network mobility.

IV. PROPOSED METHODOLOGY

Our IDS model is based on the following assumptions. (a) All the nodes are identical in their physical characteristics: if node A is within the transmission range of node B, then node B is also within the transmission range of A. (b) All the nodes are authenticated and can participate in communication, i.e., all nodes are authorized nodes. (c) The source node, destination node, and IDS nodes are taken as trusted nodes by default. (d) All the IDS nodes are set to promiscuous mode only when needed, and an IDS node will always be a neighbor to some other IDS node. (e) Since there are multiple routes from a source to a destination, the source node has to cache the other routes to mitigate the overhead incurred during a new route discovery process.

The approach consists of:
• Protocol description
• Selective black hole discovery process
• Performance analysis

A. Protocol Description

Ad hoc on-demand routing, like all reactive protocols, transmits topology information only on demand. When a node wishes to transmit traffic to a host to which it has no route, it generates a route request (RREQ) message that is flooded in a limited way to other nodes. This causes the control traffic overhead to be dynamic, and it results in an initial delay when initiating such communication. A route is considered found when the RREQ message reaches either the destination itself or an intermediate node with a valid route entry for the destination. For as long as a route exists between two endpoints, AODV remains passive. When the route becomes invalid or is lost, AODV again issues a request.

RREQ - A route request message is transmitted by a node requiring a route to another node.

RREP - A route reply message is unicast back to the originator of an RREQ if the receiver is either the node using the requested address or a node with a valid route to the requested address. The reason the message can be unicast back is that every node forwarding an RREQ caches a route back to the originator.

RERR - Nodes monitor the link status of next hops in active routes. When a link breakage in an active route is detected, a RERR message is used to notify other nodes of the loss of the link.
In order to enable this reporting mechanism, each node keeps a "precursor list" containing the IP address of each of its neighbors that is likely to use it as a next hop towards each destination.

B. Selective Black Hole Discovery Process

A selective black hole problem means that a malicious node uses the routing protocol to claim that it lies on the shortest path to the destination node, but then drops the routing packets and does not forward packets to its neighbors. A single selective black hole attack can easily occur in mobile ad hoc networks. An example is shown in the figure: node 1 stands for the source node and node 4 represents the destination node. Node 3 is a misbehaving node that replies to the RREQ packet sent from the source node and makes a false response that it has the quickest route to the destination node. Therefore node 1 erroneously judges the route discovery process to be complete and starts to send data packets to node 3. As mentioned above, a malicious node will probably drop or consume the packets. This suspicious node can be regarded as a selective black hole problem in MANETs. As a result, node 3 is able to misroute the packets easily, and network operation suffers from this problem.

C. Promiscuous Mode

Promiscuous mode is a mode for a wireless network interface controller (WNIC) that causes the controller to pass all traffic it receives to the central processing unit (CPU) rather than passing only the frames that the controller is intended to receive. This mode is normally used for packet sniffing on a router, on a computer connected to a hub (instead of a switch), or on one that is part of a WLAN.

D. Performance Analysis

The performance of the proposed routing protocol is analyzed by considering the packet delivery ratio, collision rate, and delay. The results show that the proposed protocol improves these metrics. The quantities used are the number of packets forwarded towards the destination node by the source node, the number of packets received, and the resulting probability of packets being received.

V. FLOW CHART

VI. EXPERIMENTAL SETUP AND RESULT ANALYSIS

Network Simulator 2 (ns-2) is used in this paper for the detection and isolation of selective black hole nodes. In an area of 1000 x 1000 m, 75 normal nodes executing the AODV routing protocol were randomly distributed, along with a few malicious nodes performing selective black hole attacks. Randomly chosen pairs for data communication send 5 kb of UDP-CBR traffic per second. Node speeds range between 0 and 20 m/s. Pause times of 0 s, 5 s, 10 s, and 15 s were considered, where pause time is defined as the time taken by a node to move from one place to another.

• Packet drop ratio: the total number of data packets dropped by the malicious nodes or due to any congestion among the nodes.
• Overhead (bit/s): the amount of traffic imposed on the network by our approach.
• End-to-end delay (s): the time elapsed between the moment the source node is triggered and the moment the destination node receives the packet.

The formula used to estimate the probability that a node is malicious is

• Pm = Na / (Nc + Na)

where:
1. Nc is the number of detected cooperations,
2. Na is the number of detected attacks, and
3. Pm is the probability that the node is a malicious node.
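As a rough illustration of how an IDS node might apply the detection formula above, the following Python sketch (not the authors' implementation; the counter names and the suspicion threshold are illustrative assumptions) tallies cooperations and attacks per observed neighbour, computes Pm = Na / (Nc + Na), and also reports a packet delivery ratio of the kind used in the performance analysis.

```python
# Minimal sketch (not the authors' implementation): an IDS node tallies, per
# observed neighbour, how often it cooperates (forwards RREQs/data) versus how
# often it behaves like a black hole (claims a route, then drops packets), and
# flags the node once Pm = Na / (Nc + Na) crosses an assumed threshold.
from collections import defaultdict

SUSPICION_THRESHOLD = 0.6  # assumed value; the paper does not fix one

class IdsNode:
    def __init__(self):
        self.cooperations = defaultdict(int)  # Nc per node
        self.attacks = defaultdict(int)       # Na per node

    def record_cooperation(self, node_id):
        self.cooperations[node_id] += 1

    def record_attack(self, node_id):
        self.attacks[node_id] += 1

    def malicious_probability(self, node_id):
        nc, na = self.cooperations[node_id], self.attacks[node_id]
        return na / (nc + na) if (nc + na) else 0.0

    def suspicious_nodes(self):
        nodes = set(self.cooperations) | set(self.attacks)
        return [n for n in nodes
                if self.malicious_probability(n) >= SUSPICION_THRESHOLD]

def packet_delivery_ratio(packets_sent, packets_received):
    """Fraction of packets sent by the source that reach the destination."""
    return packets_received / packets_sent if packets_sent else 0.0

ids = IdsNode()
for _ in range(3):
    ids.record_attack("node3")
ids.record_cooperation("node3")
print(ids.malicious_probability("node3"))  # 0.75 -> flagged as suspicious
print(ids.suspicious_nodes())
print(packet_delivery_ratio(1000, 600))    # 0.6, i.e. ~40% drop under attack
```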
A. Packet Drop Ratio

The packet loss rate of AODV under attack without promiscuous mode is about 40%, while the packet loss rate of AODV with promiscuous mode was approximately 30%, a reduction of 10%. The packet loss rate of AODV in the compared approach was approximately 25%, a decrease of 5% when compared with our scheme.

PM = promiscuous mode; NPM = non-promiscuous mode.

B. Overhead Ratio

This is the ratio of transmissions such as RREQ, RREP, and RERR. Some routing packets, such as RREQ and QUERY packets, are broadcast to all neighbors, while packets such as RREP and RERR travel along only a single path. The control packet overhead ratio in our approach was approximately 40%, a decrease of 5% when compared with the overhead ratio of AODV, which was approximately 45%. Our approach still leads in control overhead ratio when compared with the approach that has a 60% overhead ratio.

C. End-to-End Delay

Compared with the referenced approach, our end-to-end delay is considerably better. Usually the end-to-end delay increases as the likelihood of malicious nodes increases. We avoid overhead in the system by avoiding frequent checking for the malicious nodes that cause selective black hole attacks. As the overhead decreases, the end-to-end delay decreases as well. As far as our approach is concerned, we activate promiscuous mode as soon as a malicious node is detected. This avoids further data loss, and our IDS nodes isolate the malicious nodes, so there is no need to check frequently for malicious nodes. Consequently, our overhead is decreased, and so is the end-to-end delay.

Throughput is the amount of data transferred from one place to another, or processed, in a specified amount of time. Usually, the throughput value is inversely proportional to the packet loss. AODV with promiscuous mode activated always shows a good throughput value since it loses data packets at a lower rate. In our comparative graph, throughput increases as the number of nodes increases, but after a break point it drops from 20% to 50% gradually.

Mobile ad hoc networks have the ability to deploy a network where a traditional network infrastructure environment cannot possibly be deployed. In our approach, we have analyzed the behavior of and challenges posed by security threats in mobile ad hoc networks and implemented promiscuous mode in a better way. Although many solutions have been proposed, these solutions are still not perfect in terms of effectiveness and efficiency. A solution that works well in the presence of a single malicious node may not be applicable in the case of multiple malicious nodes. Having reviewed many approaches, applying promiscuous mode after the detection of a selective black hole attack will surely decrease the rate of data packet loss. Moreover, promiscuous mode is applied only to the nodes that were attacked rather than to all nodes; hence, unnecessary energy loss is avoided. In future work, we will enhance our approach to stop even the initial data packet loss by applying promiscuous mode to proactive routing protocols.

REFERENCES

Shila, D. M., Cheng, Y., Anjali, T., "Channel-aware detection of selective black hole attacks in wireless mesh networks," in: Proc. IEEE Global Telecommunications Conference, December 2009, pp. 1-6.
Nasser, N. and Chen, Y., "Enhanced intrusion monitoring nodes with selection of malicious nodes in mobile ad hoc networks," in: Proc. IEEE Int. Conf. on Communications (ICC '07), June 2007, pp. 1154-1159.
Dokurer, S., Erten, Y. M., Acar, C. E., "Performance analysis of ad-hoc networks under selective black hole attacks," in: Proc. of the IEEE SoutheastCon, 2007, pp. 148-153.
Djahel, S., Nait-Abdesselam, F., Khokhar, A., "An acknowledgement-based scheme to defend against cooperative black hole attacks in Optimized Link State Routing protocol," in: Proc. of the IEEE International Conference on Communications (ICC), 2008, pp. 2780-2785.
Hasswa, A., Zulker, M., Hassanein, H., "Routeguard: an intrusion detection and response system for mobile ad hoc networks," in: Wireless and Mobile Computing, Networking and Communications, vol. 3, August 2005, pp. 336-343.
Rafsanjani, M. K., Movaghar, A., "Identifying monitoring nodes with selection of authorized nodes in mobile ad hoc networks," World Applied Sciences Journal, vol. 4, no. 3, pp. 444-449, 2008.
Mahmood, R. A. R., Khan, A. I., "A survey on detecting selective black hole attack in AODV-based mobile ad hoc networks," in: Proc. of the International Symposium on High Capacity Optical Networks and Enabling Technologies (HONET), 2007, pp. 1-6.
Kurosawa, S., Nakayama, H., Kato, N., Jamalipor, A., Nemoto, Y., "Detecting blackhole attack on AODV-based mobile ad hoc networks by dynamic learning method," International Journal of Network Security, vol. 5, no. 3, pp. 338-346, 2007.
Komnios, N., Vergados, D., Douligeris, C., "Detecting unauthorized and compromised nodes in mobile ad hoc networks," Ad Hoc Networks (Elsevier), vol. 5, no. 3, pp. 289-298, 2007.
Tamilselvan, L., Sankarnarayanan, V., "Prevention of cooperative selective black hole attack in MANET," Journal of Networks, vol. 3, no. 5, pp. 13-20, 2008.
Tamilselvan, L., Sankarnarayanan, V., "Prevention of blackhole attack in MANET," in: Proc. of the International Conference on Wireless Broadband and Ultra Wideband Communication, 2007.
Xu, S., "Integrated Prevention and Detection of Byzantine Attacks in Mobile Ad Hoc Networks," PhD thesis, Computer Science, The University of Texas at San Antonio, 2009.
Cheng, B.-C., Tseng, R.-Y., "A context adaptive intrusion detection system for MANET," Computer Communications, vol. 34, pp. 310-318, 2011.
Yao, Y., Guo, L., Wang, X., Liu, C., "Routing security scheme based on reputation evaluation in hierarchical ad hoc networks," Computer Networks, vol. 54, pp. 1460-1469, 2010.
Karlof, C., Wagner, D., "Secure routing in wireless sensor networks: attacks and countermeasures," Ad Hoc Networks (Elsevier), vol. 1, no. 2-3, pp. 293-315, September 2003 (Special Issue on Sensor Network Applications and Protocols).
Xiao, B., Yu, B., Gao, C., "CHEMAS: Identify suspect nodes in selective forwarding attacks," Journal of Parallel and Distributed Computing, vol. 67, no. 11, pp. 1218-1230, 2007.
Gao, X., Chen, W., "A novel selective black hole attack detection scheme for mobile ad-hoc networks," in: IFIP International Conference on Network and Parallel Computing Workshops, 2007, pp. 209-214.
Wang, S.-S., Yan, K.-Q., Wang, S.-C., "An optimal solution for Byzantine agreement under a hierarchical cluster-oriented mobile ad hoc network," Computers & Electrical Engineering, vol. 36, no. 1, pp. 100-113, January 2010.
Banerjee, S., "Detection/removal of cooperative and selective black hole attack in mobile ad-hoc networks," in: World Congress on Engineering and Computer Science, 2008, pp. 337-342.
Anantvalee, T., Wu, J., "A Survey on Intrusion Detection in Mobile Ad Hoc Networks," in: Wireless Network Security, Springer, pp. 170-196, ISBN 978-0-387-28040-0, 2007.
Su, M.-Y., "Prevention of selective black hole attacks on mobile ad hoc networks through intrusion detection systems," Computer Communications, 2010.
A firewall is the best possible protection you have against malicious individuals (hackers) and malware (software made to steal or damage your data or device). Sometimes they can be a bit of a pain to set up correctly, especially if you have to do it on multiple devices. So can you leave them off for devices connected to your private network? A firewall should always be set up on a private or public network. This includes having a firewall setup on all the devices. This is because either for scenarios that are unintentional or intentional, a firewall is the only protection your device has. Due to it being configurable, there is no reason that it should not be set up on any network or device. This article will briefly cover what a firewall is, what it does, and how it works, along with what a private network is. With a clear understanding of these elements, we will ultimately determine if all networks, including private or public, should have firewalls in place and on their connected devices. What is a firewall, what is it used for? A firewall is a computer system that is designed to prevent any and all unauthorized access from entering a private network (your computer). It works by filtering the information that comes in from the internet, blocking the unwanted traffic (data), and filtering in the wanted data. Hence we can say that a firewall’s purpose is to create a form of a safety barrier between a private network and the public internet. A firewall exists because there will always be individuals (hackers) and malware that will be trying to get onto your system with the purpose of malicious intent. This can range from stealing your keystrokes and login information to halting your system entirely. How does a firewall work? A firewall works by filtering the incoming internet data, and it will use its rules to determine if that data is allowed to enter the network. These rules are also typically known as an access control list. These rules are customizable and can be determined and set by the network administrator. The network administrator will decide what type of data can enter the network and what type of data will be allowed to exit the network. These principles are known as “allow” or “deny” permissions. For example, one instance where a firewall would be applicable is to block specific IP addresses from accessing the network. This means that any data trying to be sent from that particular IP address would not be allowed through the firewall. How does a firewall make its rules? Our example above dictated one rule that a firewall utilizes in order to keep a network safe. However, this is not the only rule that a firewall can create to keep malicious users and data from entering the network. Firewalls can create rules in accordance with; - IP addresses - Domain Names Our example above dictated rules for IP addresses, but a firewall can allow and deny data based on any of these elements that we listed above, and it can have multiple rules, based on some, many, or all of these elements. Does everyone need a firewall? If using a piece of technology that connects to the public internet, everybody in the world should use and definitely needs a firewall. This is especially true for large corporations and organizations. Large companies and organizations have many devices and systems that access their private network, and if the entire network did not have a firewall, any number of people with malicious intent could gain access to their private data. Is it better to have a firewall on or off? 
Having a firewall on in most cases is always the better and safer option for devices connected to the internet. In some instances where one user is using multiple devices, they may choose to turn off particular firewalls for particular reasons. However, a firewall has settings (as we discussed) that give you the option to allow specific data to move through the firewall. This means you can go into your firewall and allow certain programs or systems to have access or deny them access to other networks and the internet. With this ability, it is always better to have a firewall turned on. What is a private network? A private network is where one or many digital devices have a connection to a specific network wherein restrictions are established with the hope of enforcing and promoting a secure environment amongst those digital devices. This type of network can and sometimes will be configured in a specific way as not to allow other devices (external devices) to access the network. Furthermore, only a specified and select set of digital devices will be able to access this type of network depending on the specified network settings. Should Your Firewall Be On For A Private Network? Now that we know precisely what a firewall is, how it works and what it does, and we know what a private network is, we can discuss if you really need a firewall for a private network? It would seem that if you are behind a firewall that blocks malicious access from the internet, then you do not need a private network firewall? If you are using a home network (a private network) and perhaps there are two or three devices connected to your router, and you alone use them all, maybe there would be no need for a private firewall on all the devices. This is because you alone know what is being put on the machines and how they are being utilized with regards to the internet. In some instances, a private network firewall could be a nuisance if you are constantly moving data from one device to the next and are always halted by a firewall sign-in screen. But this is again only if you know precisely what type of data is being transferred between devices and you alone use them. In this scenario and others similar to it, perhaps you would not need private network firewall restrictions. The problem comes in when there are different users on various devices, and those devices have access to your device which has no firewall in place on a private network. This scenario can relate to a home network or even a companies private network. Imagine you had a user that unintentionally put malicious malware onto their device. This malware could potentially infect your device and cause much harm and damage. Even in the case where you alone are using multiple devices, you may unintentionally load malware or give access to something that you should not have. Larger networks even have sub-private networks that do not allow access to other devices and networks. For example, a large company will have many private networks with firewalls to restrict access to confidential and vital data. One section of the companies network, for example, the HR department, will not have access to the IT department’s network. Hence, it would be best if you always had a firewall on all devices on all networks, public or private, in all situations. We determined that a firewall is a computer system with the sole purpose of restricting data amongst devices and the internet. 
It is in place because there will always be individuals with malicious intent trying to disrupt, steal, and destroy data. We also determined that a firewall has rules that can be set by the network administrator and by yourself if it is your own home network to “allow” or “deny” the sending or receiving of specific data based on specific criteria. With firewalls having this feature, we concluded that it is always best to have a firewall on even if you are using a private network because there sometimes may be instances where a user on the private network unintentionally loads malware onto their system, and it may then have access to your device. Even if you are the only person using multiple devices connected to a private network, it would always be best if you set the restrictions for each firewall on those specific devices allowing you safe and secure access when you need it.
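Returning to the earlier discussion of how a firewall builds its rules from elements such as IP addresses and domain names, the following is a minimal, hypothetical Python sketch of access-control-list evaluation. The rule fields, the first-match policy, and the default-deny behaviour are illustrative assumptions rather than the configuration of any particular firewall product; real firewalls express the same idea in their own rule syntax.

```python
# Minimal sketch of access-control-list evaluation: each rule names the fields
# it cares about (source IP, domain, port, program) and an action; the first
# matching rule wins, and anything unmatched is denied by default.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                      # "allow" or "deny"
    src_ip: Optional[str] = None
    domain: Optional[str] = None
    port: Optional[int] = None
    program: Optional[str] = None

    def matches(self, packet: dict) -> bool:
        checks = [("src_ip", self.src_ip), ("domain", self.domain),
                  ("port", self.port), ("program", self.program)]
        return all(value is None or packet.get(field) == value
                   for field, value in checks)

def evaluate(rules, packet, default="deny"):
    for rule in rules:
        if rule.matches(packet):
            return rule.action
    return default

acl = [
    Rule(action="deny", src_ip="203.0.113.7"),          # block one IP address
    Rule(action="allow", port=443),                      # allow HTTPS traffic
    Rule(action="allow", program="backup.exe", port=22), # allow one program
]

print(evaluate(acl, {"src_ip": "203.0.113.7", "port": 443}))   # deny
print(evaluate(acl, {"src_ip": "198.51.100.2", "port": 443}))  # allow
print(evaluate(acl, {"src_ip": "198.51.100.2", "port": 23}))   # deny (default)
```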
From Windows Central: https://www.windowscentral.com/petya-ransomware-windows There’s another massive ransomware attack sweeping across the world. Here’s what you need to know to stay safe. Little more than a month has passed since the notorious WannaCry ransomware attack hit headlines across the world. Now, sadly, we’re in a period of another such attack, and this time it’s dubbed “Petya” or “GoldenEye.” The basic problem is the same as the WannaCry outbreak: PCs are infected, locked up and files encrypted with a ransom demanded for access to the blocked files. It’s not exactly the same as WannaCry, nor is it currently as widespread, but it’s still important to know what you’re dealing with. What is Petya? Petya is a piece of ransomware that infects computers with the intent of monetary extortion in return for access to the contents of the PCs. It encrypts files, claiming only to let you back in upon receipt of a ransom. Which platforms does it affect? It’s a Windows-only affair, and Microsoft already released a patch in March that should protect users, assuming it’s installed. How does Petya spread? Petya tries to infect PCs using two methods, moving on to the second if the first fails. Once again, as with WannaCry, Petya utilizes the leaked EternalBlue exploit first developed by American security services. If that fails because the system has been properly patched, for example, it moves on to the second method, which is to use two Windows administrative tools. Unlike WannaCry, Petya looks to spread within local networks without seeding itself externally, perhaps limiting its early global impact somewhat. As reported by The Guardian, there is a secondary “vaccine” that may prevent infection on a specific PC, but it leaves Petya free to try and spread to others: For this particular malware outbreak, another line of defence has been discovered: ‘Petya’ checks for a read-only file, C:\Windows\perfc.dat, and if it finds it, it won’t run the encryption side of the software. But this “vaccine” doesn’t actually prevent infection, and the malware will still use its foothold on your PC to try to spread to others on the same network. Read the full article HERE!
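For illustration only, the following Python sketch shows the read-only perfc.dat check reported above as a local "vaccine". It merely creates the file this particular malware build reportedly looks for; it does not prevent infection by other means, does not stop the malware from trying to spread across the network, and must be run with administrative rights to write into C:\Windows. The file path comes from the article; everything else is an assumption.

```python
# Illustrative sketch of the reported "vaccine": create a read-only
# C:\Windows\perfc.dat so that this particular Petya build skips encryption.
# This does NOT stop the malware from spreading to other hosts on the network.
import os
import stat

VACCINE_PATH = r"C:\Windows\perfc.dat"  # path reported in the article

def create_vaccine(path: str = VACCINE_PATH) -> None:
    if os.path.exists(path):
        print(f"{path} already exists")
    else:
        with open(path, "w"):
            pass  # an empty file is enough for the reported check
        print(f"created {path}")
    # clear the write bits so the file is read-only
    os.chmod(path, stat.S_IREAD)

if __name__ == "__main__":
    create_vaccine()
```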
Although the Heartbleed vulnerability allowed for credential theft on an unprecedented scale, account compromises have long been a significant concern for security operations. Even though an organization may not have directly implemented systems vulnerable to Heartbleed, users sharing account names and passwords across applications could easily have had their credentials stolen from a separate, vulnerable site. To detect a malicious actor using stolen credentials to log in to your organization's network or web applications, LogRhythm includes a set of purpose-built Advanced Intelligence Engine (AIE) rules. Because the LogRhythm Labs team is in the process of renaming and reorganizing AIE rules, the rule ID, which will not change, is included with the current rule name in order to keep this post relevant. The most basic way of catching a malicious login is to blacklist or whitelist certain source locations for remote logins. Three rules, Susp:Inbound:Connection With Blacklisted Country (464), Susp:Inbound Connection With Non-Whitelisted Country (467), and Ext:Acnt Comp:Remote Auth From Unauthorized Location (6), are very easy to implement, yet are very effective at detecting compromised accounts. Obviously, these rules will not trigger when the malicious actor happens to be outside a blacklisted area or within a whitelisted one, but they certainly narrow down possible breach points. The second group of rules detects authentications for the same account across disparate geographic areas. For example, a user typically shouldn't be logged in from both Denver and London at the same time. The rules Ext:Acnt Comp:Concurrent Auth From Multiple Cities (39), Ext:Acnt Comp:Concurrent Auth From Multiple Regions (4), and Ext:Acnt Comp:Concurrent Auth From Multiple Countries (5) will detect malicious actors logging into accounts already in use. Finally, attackers who have stolen credentials that include the organization's domain may attempt authentication even if the user wisely uses a different password. In this case, many authentication failures should be observed. If rules such as Ext:Acnt Atck:Account Scan On Single Host (8) and Ext:Acnt Atck:Brute Force From A Single Origin Host (2) are triggered, the organization can identify users who have a compromised external account. And if the authentication failures are followed by a successful login, rules such as Ext:Acnt Comp:Account Scan On Single Host (7) and Ext:Acnt Comp:Brute Force From A Single Origin Host (1) will identify account compromises. To reiterate, large account breaches, even if not experienced directly by an organization, can still easily lead to security breaches. Monitoring accounts allows for much easier mitigation and remediation in the event of an account compromise and should be considered standard practice for even smaller-scale security operations.
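As a minimal sketch of the idea behind the concurrent-authentication rules described above (and not LogRhythm's actual AIE rule logic), the following Python snippet groups successful logins per account within a sliding time window and flags any account seen authenticating from more than one country. The window length and the event format are illustrative assumptions.

```python
# Minimal sketch of "concurrent auth from multiple countries": group successful
# logins per account inside a sliding window and flag any account that
# authenticates from more than one country within that window.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)  # assumed window; real rules are configurable

def concurrent_geo_logins(events, window=WINDOW):
    """events: iterable of (timestamp, account, country), pre-sorted by time."""
    recent = defaultdict(list)   # account -> [(timestamp, country), ...]
    alerts = []
    for ts, account, country in events:
        history = [(t, c) for t, c in recent[account] if ts - t <= window]
        history.append((ts, country))
        recent[account] = history
        countries = {c for _, c in history}
        if len(countries) > 1:
            alerts.append((account, ts, sorted(countries)))
    return alerts

events = [
    (datetime(2014, 4, 10, 9, 0), "alice", "US"),
    (datetime(2014, 4, 10, 9, 20), "alice", "GB"),   # impossible travel
    (datetime(2014, 4, 10, 9, 30), "bob", "US"),
    (datetime(2014, 4, 10, 11, 0), "bob", "US"),
]
for account, ts, countries in concurrent_geo_logins(events):
    print(f"ALERT {ts} {account}: concurrent logins from {countries}")
```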
470. Many thanks for giving the Secretariat an opportunity to provide an update of IP-related issues as they have come up in the most recent trade policy reviews. As at previous sessions, this update concentrates just on those aspects of IP-related trade policy matters that other Members actively chose to address in the course of the review of Members' trade policy settings. Since the last TRIPS Council meeting in October 2017, the trade policy reviews of the West African Economic and Monetary Union, the Plurinational State of Bolivia, Cambodia, The Gambia, Malaysia and Egypt have taken place.

471. These recent reviews covered a wide range of IP-related trade policy matters; a full account of these matters would be particularly lengthy. Following, therefore, for the sake of brevity, is a representative list of some of the areas on which other Members have expressed concrete interest in the form of follow-up questions during these reviews:
• incentives for firms to acquire foreign-owned IP rights;
• mechanisms to foster trading in IP rights and for realising their financial value;
• collective management of licenses on copyright content;
• reduction of "red tape" associated with obtaining and protecting IP rights;
• policies of exhaustion of IP rights, specifically patents, trademarks, copyright and plant varieties;
• procedures for registration of trademarks and for opposition to trademark registration;
• geographical indications;
• compulsory licenses on pharmaceutical patents;
• acceptance and domestic implementation of the Protocol Amending the TRIPS Agreement;
• a constitutional right to access to medicines in relation to the patent system;
• protection of undisclosed information;
• enforcement of IP rights online and at the border;
• ex officio authority to take enforcement action in cases of infringement of business and entertainment software copyright;
• administrative measures for the enforcement of IP rights;
• the structure, hierarchy and competence of specific courts on IP matters within the domestic judicial system;
• the function of a registry of importers as a mechanism in supporting the enforcement of IP rights;
• cooperation agreements between competition and IP authorities;
• regional IP organizations; and
• ratification of WIPO treaties (including Marrakesh) and accession to the UPOV Convention.

472. The IP section in the latest WTO Director-General Monitoring Report, issued in mid-November 2017, reported on the entry into force of the Protocol Amending the TRIPS Agreement and highlighted certain trade-related IP policy initiatives undertaken by Canada, China, Paraguay and South Africa.
'ChewBacca' Malware Taps Tor Network
18 Dec 2013
InformationWeek, By Mathew J. Schwartz
The next Star Wars film may not be scheduled to arrive until the summer of 2015, but the marketing tie-ins have already begun -- at least when it comes to cybercriminals trying to make a fast and fraudulent buck. Security researchers have spotted a Tor-using banking Trojan that's been dubbed "ChewBacca" by its creators. According to Kaspersky Lab, which discovered the malware on an underground cybercrime forum, once the malware (detected as a file named "Fsysna.fej") successfully infects a PC, it also drops a copy of Tor 0.2.3.25 for the malware to use. The Trojan then logs all keystrokes and sends the data back to the botnet controllers via Tor. Read more.
How to Catch Data Exfiltration with Machine Learning Why is Detecting Data Exfiltration of Utmost Importance? In today's landscape, there is an unprecedented surge in ransomware attacks and data breaches aimed at coercing businesses. Concurrently, the cybersecurity industry is confronted with numerous critical vulnerabilities within database software and corporate websites. These developments paint a grim picture of data exposure and unauthorized data removal that security leaders and their teams are contending with. This article sheds light on this challenge and elaborates on the advantages offered by Machine Learning algorithms and Network Detection & Response (NDR) methodologies. Data exfiltration frequently marks the concluding phase of a cyberattack, representing the final chance to identify the breach before the data becomes public or is exploited for nefarious purposes like espionage. Nevertheless, data leakage isn't solely a result of cyberattacks; it can also occur due to human errors. While it's ideal to prevent data exfiltration through robust security measures, the increasing complexity and widespread distribution of infrastructures, combined with the integration of outdated devices, render prevention a challenging endeavour. In such situations, detection functions as our ultimate safeguard – indeed, it's better to detect it late than not at all. Confronting the Difficulty of Detecting Data Exfiltration Perpetrators can take advantage of multiple security vulnerabilities to collect and illicitly transfer data, utilizing protocols such as DNS, HTTP(S), FTP, and SMB. The MITRE ATT&CK framework delineates numerous patterns of data exfiltration attacks. Nevertheless, staying current with each protocol and infrastructure alteration is an imposing challenge, adding complexity to the pursuit of comprehensive security monitoring. What is required is a tailored analysis based on the volume of data, specific to devices or networks, with adjusted thresholds to enhance effectiveness. This is where Network Detection & Response (NDR) technology comes into play. NDR powered by machine learning offers two significant capabilities: - It enables practical monitoring of all relevant network communications, serving as the foundation for comprehensive data exfiltration monitoring. This includes not only interactions between internal and external systems but also internal communications. Some attacker groups transfer data directly outside, while others utilize dedicated internal exfiltration hosts. - Machine learning algorithms play a pivotal role in adapting and learning context-specific thresholds for different devices and networks, which is crucial in the current diverse landscape of infrastructure. Unravelling Machine Learning for Data Exfiltration Detection Before the advent of Machine Learning, the process involved manual configuration of thresholds specific to networks or clients. Consequently, an alert would be triggered if a device exceeded the predefined data threshold when communicating outside the network. However, the introduction of Machine Learning algorithms has ushered in several advantages for data exfiltration detection: - Acquiring knowledge of network traffic communication patterns and the upload/download behavior of clients and servers, providing a crucial foundation for identifying anomalies. - Establishing appropriate thresholds tailored to various clients, servers, and networks. 
Managing and defining these thresholds for each network or client group would otherwise be a laborious task.
- Recognizing deviations in learned volume patterns, thereby detecting outliers and suspicious data transfers, whether they occur internally or involve exchanges between internal and external systems.
- Utilizing scoring systems to quantify exceptional data points, establishing connections with other systems to assess the data, and creating notifications for detected irregularities.

Visualization: When the traffic volume surpasses a certain threshold, as determined by the learned profile, an alert will be triggered.

Machine Learning-Powered Network Detection & Response Comes to the Rescue

Network Detection & Response (NDR) solutions offer a holistic and insightful approach to identifying unusual network behaviors and sudden spikes in data transfer. By harnessing the capabilities of Machine Learning (ML), these solutions build a baseline of network communication patterns, enabling the rapid detection of anomalies, whether they pertain to volume analysis or covert channels. With this advanced and proactive approach, NDRs can identify the earliest indicators of intrusion, often well in advance of any data exfiltration occurrence.

The ExeonTrace Platform: Data Volume Outlier Detection

A standout NDR solution known for its meticulous data volume monitoring is ExeonTrace. Developed in Switzerland, this NDR system harnesses award-winning Machine Learning algorithms to passively scrutinize and assess real-time network traffic, pinpointing potential instances of risky or unauthorized data transfer. Notably, ExeonTrace integrates seamlessly with your current infrastructure, eliminating the need for additional hardware agents. The benefits of ExeonTrace go beyond enhancing security; it also contributes to a deeper understanding of normal and unusual network activities, a pivotal aspect in fortifying and optimizing your overall security framework.

ML in Network Detection: Key Elements

In the contemporary digital landscape, network expansion and heightened vulnerabilities are constant challenges. Consequently, robust data exfiltration detection is imperative. However, given the intricacy of modern networks, manually establishing thresholds for outlier detection can be not only burdensome but also nearly impractical. By employing volume-based detection and monitoring traffic behaviors, one can spot data exfiltration by identifying deviations in data volume and upload/download traffic patterns. This underscores the potency of Machine Learning (ML) within Network Detection & Response (NDR) systems, automating the recognition of infrastructure-specific thresholds and anomalies. Among these NDR solutions, ExeonTrace distinguishes itself by offering comprehensive network visibility, efficient anomaly detection, and a bolstered security posture. These attributes ensure that business operations can proceed securely and efficiently. To explore how ML-powered NDR can enhance data exfiltration detection and identify irregular network behaviors for your organization, we invite you to request a demonstration.
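To make the volume-baseline idea concrete, here is a minimal Python sketch of per-host outlier detection. It is not ExeonTrace's algorithm, only an illustration of learning a baseline of outbound volume per interval and scoring new observations against it; the z-score threshold and the minimum sample count are assumptions that would be tuned per network in practice.

```python
# Minimal sketch of volume-based exfiltration detection: learn a per-host
# baseline (mean/std of outbound bytes per interval) and score new intervals;
# anything several standard deviations above the learned profile is flagged.
import math
import random
from collections import defaultdict

Z_THRESHOLD = 4.0  # assumed sensitivity; tuned per network in practice

class VolumeBaseline:
    def __init__(self):
        self.history = defaultdict(list)  # host -> [bytes_out per interval]

    def learn(self, host, bytes_out):
        self.history[host].append(bytes_out)

    def score(self, host, bytes_out):
        samples = self.history[host]
        if len(samples) < 10:          # not enough data to judge yet
            return 0.0
        mean = sum(samples) / len(samples)
        var = sum((x - mean) ** 2 for x in samples) / len(samples)
        std = math.sqrt(var) or 1.0    # avoid division by zero
        return (bytes_out - mean) / std

    def is_outlier(self, host, bytes_out):
        return self.score(host, bytes_out) >= Z_THRESHOLD

baseline = VolumeBaseline()
random.seed(7)
for _ in range(30):  # simulated normal hours: roughly 4-6 MB outbound each
    baseline.learn("10.0.0.5", random.randint(4_000_000, 6_000_000))

print(baseline.is_outlier("10.0.0.5", 6_500_000))    # False: normal variation
print(baseline.is_outlier("10.0.0.5", 900_000_000))  # True: possible exfiltration
```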
Threat Research Blog
The FireEye Labs team posts blog entries under threat research to present and discuss cyber attacks and threat intelligence from a technical perspective. They cover the full spectrum of exploits and vulnerabilities, including advanced malware and targeted threats. Entries filed under 'Android':

A Growing Number of Android Malware Families Believed to Have a Common Origin: A Study Based on Binary Code (March 11, 2016, by Wu Zhou, Junyuan Zeng, Jimmy Su, Linhai Song | Threat Research, Advanced Malware): A sophisticated malware family has enough code similarities to indicate that it shares a common origin with SlemBunk.

May 5, 2016, by Jake Valletta | Vulnerabilities, Threat Research: A vulnerability present on Android devices allows a seemingly benign application to access sensitive user data, including SMS and call history, and to perform potentially sensitive actions such as changing system settings or disabling the lock screen.

April 26, 2016, by Wu Zhou, Jimmy Su, Yong Kang, Deyu Hu | Mobile Threats, Threat Research: Smishing (SMS phishing) offers a unique vector to infect mobile users. FireEye Labs recently discovered a RuMMS campaign, which threat actors are using to distribute their malware. They are using shared-hosting providers, which adds flexibility to the threat actors' campaign and makes it harder for defending parties to track these moving targets.

August 19, 2015, by Mariam Muntaha, Fuaad Ahmad, Jimmy Su | Mobile Threats, Threat Research.

January 20, 2015, by Vishwanath Raman, Yulong Zhang, Adrian Mettler, Malte Isberner | Mobile Threats, Threat Research: FireEye analyzed the most popular Android applications available in the Google Play store and found that a significant number of them do not encrypt sensitive data with strong cryptography, leaving them vulnerable to hackers.
There are a bunch of new SDL resources available on the Microsoft Security Development Lifecycle page. For every step in the software development process (Requirements, Design, Implementation, Verification, Release) there are tools and/or training videos available. For a video giving an overview of the SDL tools, click here.
- SDL Process Template for Visual Studio Team System 2008
- MSF-Agile + SDL Process Template for Visual Studio Team System 2010
- MSF-Agile + SDL Process Template for Visual Studio Team System 2008
SDL Threat Modeling Tool
For more information on the threat modeling tool, click here.
FxCop analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. For more information, click here. Watch the video here.
Anti-Cross Site Scripting Library
This is specifically designed to help mitigate the potential for Cross-Site Scripting (XSS) attacks in web-based applications. Watch the video here.
Microsoft Code Analysis Tool .NET
CAT.NET is a binary code analysis tool that helps identify common variants of certain prevailing vulnerabilities that can give rise to common attack vectors such as Cross-Site Scripting (XSS), SQL Injection, and XPath Injection. Watch the video here.
BinScope Binary Analyzer
BinScope Binary Analyzer is a verification tool that analyzes binaries to ensure that they have been built in compliance with the SDL requirements and recommendations. Watch the video here.
SDL MiniFuzz File Fuzzer
MiniFuzz is a basic testing tool designed to help detect code flaws that may expose security vulnerabilities in file-handling code. Watch the video here.
Application Verifier is a runtime verification tool for native code that assists in finding subtle programming errors that can be difficult to identify with normal application testing. For more information, click here.
SDL Regex Fuzzer
SDL Regex Fuzzer is a verification tool to help test regular expressions for potential denial-of-service vulnerabilities. Watch the video here.
Attack Surface Analyzer Beta
Attack Surface Analyzer is a tool that highlights the changes in system state, runtime parameters and securable objects on the Windows operating system.
The release resources are the same templates and videos as the ones in the Requirements section.
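As an illustration of the class of problem SDL Regex Fuzzer is designed to find (and not a reimplementation of the tool), the following Python sketch times a regular expression with nested quantifiers against crafted non-matching input; the pattern and the input sizes are illustrative assumptions.

```python
# Minimal sketch of the problem SDL Regex Fuzzer looks for: a pattern with
# nested quantifiers backtracks catastrophically on crafted non-matching input,
# so matching time explodes as the input grows.
import re
import time

SUSPECT_PATTERN = r"^(a+)+$"   # classic catastrophic-backtracking example

def time_match(pattern, text):
    start = time.perf_counter()
    re.match(pattern, text)
    return time.perf_counter() - start

for n in (16, 20, 24):
    crafted = "a" * n + "!"    # the trailing '!' forces full backtracking
    elapsed = time_match(SUSPECT_PATTERN, crafted)
    print(f"input length {n + 1:3d}: {elapsed:.3f}s")
# Time roughly doubling for each extra 'a' signals a potential
# regular-expression denial-of-service vulnerability.
```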
FakeNet is a Windows network simulation tool that aids in the dynamic analysis of malicious software. The tool simulates a network so that malware interacting with a remote host continues to run, allowing the analyst to observe the malware's network activity from within a safe environment. The goal of the project is to: Be easy [...]
Hook Analyser is a freeware application which allows an investigator/analyst to perform static and run-time/dynamic analysis of suspicious applications, and also to gather (analyse and correlate) threat-intelligence-related information (or data) from various open sources on the Internet. Essentially it's a malware analysis tool that has evolved to add some cyber threat intelligence features [...]
Malware Analyser is a freeware tool to perform static and dynamic analysis on malware executables; it can be used to identify potential traces of anti-debug, keyboard hooks, system hooks and DEP setting change calls in the malware. This is a stepping release since, for the first time, the Dynamic Analysis has been included for file creations [...]
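The core trick these dynamic-analysis tools rely on can be shown with a minimal Python sketch (an illustration of the idea only, not FakeNet's implementation): accept any TCP connection, log whatever the sample sends, and reply with a canned HTTP response so the malware keeps running inside an isolated analysis VM. The port and response below are arbitrary choices:

```python
# Minimal sketch of a fake network endpoint for dynamic malware analysis.
# Run only inside an isolated analysis VM with traffic redirected to it.
import socket

CANNED = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"

def fake_listener(host="0.0.0.0", port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(4096)                    # whatever the sample sent
                print(f"[{addr[0]}:{addr[1]}] {data!r}")  # log the request for analysis
                conn.sendall(CANNED)                      # keep the sample talking

if __name__ == "__main__":
    fake_listener()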
Abstract: Among the more typical forensic voice comparison (FVC) approaches, the acoustic-phonetic statistical approach is suitable for text-dependent FVC, but it does not fully exploit available time-varying information of speech in its modelling. The automatic approach, on the other hand, essentially deals with text-independent cases, which means temporal information is not explicitly incorporated in the modelling. Text-dependent likelihood ratio (LR)-based FVC studies, in particular those that adopt the automatic approach, are few. This preliminary LR-based FVC study compares two statistical models, the Hidden Markov Model (HMM) and the Gaussian Mixture Model (GMM), for the calculation of forensic LRs using the same speech data. FVC experiments were carried out using different lengths of Japanese short words under a forensically realistic, but challenging condition: only two speech tokens for model training and LR estimation. Log-likelihood-ratio cost (Cllr) was used as the assessment metric. The study demonstrates that the HMM system consistently outperforms the GMM system in terms of average Cllr values. However, words longer than three mora are needed if the advantage of the HMM is to become evident. With a seven-mora word, for example, the HMM outperformed the GMM by a Cllr value of 0.073.
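For reference, the Cllr metric mentioned above is conventionally defined as follows (the standard log-likelihood-ratio cost formulation, not an equation quoted from this study), where N_ss and N_ds are the numbers of same-speaker and different-speaker comparisons and LR_i, LR_j are the corresponding likelihood ratios:

```latex
C_{llr} = \frac{1}{2}\left[
  \frac{1}{N_{ss}} \sum_{i=1}^{N_{ss}} \log_2\!\left(1 + \frac{1}{LR_i}\right)
  + \frac{1}{N_{ds}} \sum_{j=1}^{N_{ds}} \log_2\!\left(1 + LR_j\right)
\right]
```

Lower values indicate better-calibrated, more discriminating likelihood ratios, which is why the HMM's lower average Cllr is read as an advantage.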
Identifying the root cause and impact of a system intrusion remains a foundational challenge in computer security. Digital provenance provides a detailed history of the flow of information within a computing system, connecting suspicious events to their root causes. Although existing provenance-based auditing techniques provide value in forensic analysis, they assume that such analysis takes place only retrospectively. Such post-hoc analysis is insufficient for realtime security applications; moreover, even for forensic tasks, prior provenance collection systems exhibited poor performance and scalability, jeopardizing the timeliness of query responses. We present CamQuery, which provides inline, realtime provenance analysis, making it suitable for implementing security applications. CamQuery is a Linux Security Module that offers support for both userspace and in-kernel execution of analysis applications. We demonstrate the applicability of CamQuery to a variety of runtime security applications including data loss prevention, intrusion detection, and regulatory compliance. In evaluation, we demonstrate that CamQuery reduces the latency of realtime query mechanisms by at least 89%, while imposing minimal overheads on system execution. CamQuery thus enables the further deployment of provenance-based technologies to address central challenges in computer security.
Attackers constantly evade intrusion detection systems as new attack vectors sidestep their defense mechanisms. Provenance provides a detailed, structured history of the interactions of digital objects within a system. It is ideal for intrusion detection as it offers a holistic, attack-vector-agnostic view of system execution. We believe that graph analysis on provenance graphs fundamentally strengthens detection robustness. Towards this goal, we discuss opportunities and challenges associated with provenance-based intrusion detection and offer our insights based on our past experience.
Open data and open-source software may be part of the solution to science's reproducibility crisis, but they are insufficient to guarantee reproducibility. Requiring minimal end-user expertise, encapsulator creates a "time capsule" with reproducible code in a self-contained computational environment. encapsulator provides end-users with a fully-featured desktop environment for reproducible research.
System security is somewhat stymied because it is difficult, if not impossible, to design system defenses that address the full complexity of a system's interaction. Interestingly, this problem has parallels in understanding how machine learning (ML) algorithms make predictions. Both of these problems require a structured, comprehensive understanding of what a system/model is doing. My dissertation addresses these seemingly disparate problems by exploiting data provenance, which provides just such a solution. I exploit provenance both to design intrusion detection systems and to explain how ML algorithms arrive at their predictions.
Developing Big Data Analytics workloads often involves trial and error debugging, due to the unclean nature of datasets or wrong assumptions made about data. When errors (e.g., program crash, outlier results, etc.) arise, developers are often interested in identifying a subset of the input data that is able to reproduce the problem.
BIGSIFT is a new faulty data localization approach that combines insights from automated fault isolation in software engineering and data provenance in database systems to find a minimum set of failure-inducing inputs. BIGSIFT redefines data provenance for the purpose of debugging using a test oracle function and implements several unique optimizations, specifically geared towards the iterative nature of automated debugging workloads. BIGSIFT improves the accuracy of fault localizability by several orders of magnitude (~10³ to 10⁷×) compared to Titian data provenance, and improves performance by up to 66× compared to Delta Debugging, an automated fault-isolation technique. For each faulty output, BIGSIFT is able to localize fault-inducing data within 62% of the original job running time.
Data provenance describes how data came to be in its present form. It includes data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While there have been several prior whole-system provenance systems that captured a comprehensive, systemic and ubiquitous record of a system's behavior, none have been widely adopted. They either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations; performing fault/intrusion detection; and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.
We present FRAPpuccino (or FRAP), a provenance-based fault detection mechanism for Platform as a Service (PaaS) users, who run many instances of an application on a large cluster of machines. FRAP models, records, and analyzes the behavior of an application and its impact on the system as a directed acyclic provenance graph. It assumes that most instances behave normally and uses their behavior to construct a model of legitimate behavior. Given a model of legitimate behavior, FRAP uses a dynamic sliding window algorithm to compare a new instance's execution to that of the model. Any instance that does not conform to the model is identified as an anomaly. We present the FRAP prototype and experimental results showing that it can accurately detect application anomalies.
An abundance of data in many disciplines has accelerated the adoption of distributed technologies such as Hadoop and Spark, which provide simple programming semantics and an active ecosystem. However, the current cloud computing model lacks the kinds of expressive and interactive debugging features found in traditional desktop computing.
We seek to address these challenges with the development of BIGDEBUG, a framework providing interactive debugging primitives and tool-assisted fault localization services for big data analytics. We showcase the data provenance and optimized incremental computation features to effectively and efficiently support interactive debugging, and investigate new research directions on how to automatically pinpoint and repair the root cause of errors in large-scale distributed data processing.
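To make the fault-isolation idea behind BIGSIFT and Delta Debugging more concrete, here is a simplified Python sketch of a ddmin-style minimization loop; it is not BIGSIFT's implementation, and the `oracle` callback (returning True when the failure still reproduces on a candidate input subset) is a hypothetical placeholder:

```python
# Simplified delta-debugging-style minimization: shrink a failure-inducing
# input list to a smaller subset that still triggers the failure, as judged
# by a user-supplied test oracle.
def ddmin(inputs, oracle, granularity=2):
    while len(inputs) >= 2:
        chunk = max(1, len(inputs) // granularity)
        subsets = [inputs[i:i + chunk] for i in range(0, len(inputs), chunk)]
        reduced = False
        for i in range(len(subsets)):
            # Try removing one chunk: keep the complement of subset i.
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if oracle(complement):                    # failure still reproduces
                inputs = complement
                granularity = max(granularity - 1, 2)
                reduced = True
                break
        if not reduced:
            if granularity >= len(inputs):            # cannot split any finer
                break
            granularity = min(len(inputs), granularity * 2)
    return inputs

# Usage sketch with a toy oracle: the "failure" occurs whenever record 42 is present.
records = list(range(100))
print(ddmin(records, lambda subset: 42 in subset))    # -> [42]
```

BIGSIFT's contribution, per the abstract above, is to combine this style of oracle-driven search with data provenance so the search space is pruned far more aggressively than blind bisection.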
Government Drupal Sites Scan This script automatically tests a list of federal website domains to determine if Drupal is being used and identifies the Drupal version. I updated the script to pull data from a relocated file and added a note about a system dependency. You can access the script at https://github.com/alexb7217/drupal-scan. For more information and data, visit the website of Ben Balter, one of the “baddest of the badass innovators,” at https://ben.balter.com/2021-analysis-of-federal-dotgov-domains/technologies/. This script is designed to scan a list of federal website domains and identify if Drupal is being used, as well as the Drupal version being used. This information could be used to analyze the prevalence of Drupal usage among government websites and potentially identify areas where security updates or other maintenance may be needed. The script could also be used to track changes in Drupal usage over time or to monitor the usage of specific versions of Drupal across federal websites.
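For illustration, here is a minimal Python sketch of the kind of check such a scan performs (a conceptual sketch, not the code in the linked repository): fetch a domain and look for common Drupal fingerprints such as the X-Generator header or the meta generator tag, which often include the version. The domain in the usage example is hypothetical:

```python
# Conceptual Drupal fingerprint check for a single domain.
import re
import requests

def detect_drupal(domain, timeout=10):
    url = f"https://{domain}/"
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException as exc:
        return {"domain": domain, "error": str(exc)}
    # Drupal commonly advertises itself in the X-Generator header...
    generator = resp.headers.get("X-Generator", "")
    if not generator:
        # ...or in a <meta name="Generator"> tag in the page body.
        m = re.search(r'<meta name="Generator" content="([^"]+)"', resp.text, re.I)
        generator = m.group(1) if m else ""
    return {
        "domain": domain,
        "drupal": "drupal" in generator.lower(),
        "generator": generator or None,
    }

if __name__ == "__main__":
    for d in ["example.gov"]:          # hypothetical domain list
        print(detect_drupal(d))
```

Running a check like this across a list of federal domains is what allows the kind of prevalence and version-tracking analysis described above.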
The experiment we propose to evaluate the performance of our approach is discussed in this section. The real-world Android applications considered in the experimental analysis were obtained from three different application repositories. The first repository is composed of freely available samples with ransomware behaviours belonging to 11 widespread malicious families, gathered using the VirusTotal web service [29]. In particular, we consider the following ransomware families: Doublelocker, Koler, Locker, Fusob, Porndroid, Pletor, Lockerpin, Simplelocker, Svpeng, Jisuit and Xbot. Ransomware behaviour in the Android environment basically exhibits two main malicious actions: the first is aimed at locking the device, and the second is devoted to ciphering the user files. Both of these actions demand that the infected user pay a ransom (typically in bitcoin) in order to use their own device and to access their own files. In particular, the Fusob, Koler, Lockerpin, Locker, Porndroid and Svpeng families exhibit the locker behaviour (i.e., they do not perform any ciphering operation but prevent victims from accessing their devices), the Pletor, Doublelocker and Simplelocker payloads are able to cipher the user files (but users infected with samples belonging to these families are still able to use the device), while samples of the Jisuit and Xbot malicious families exhibit both the locker and cipher ransomware behaviours. The second repository we considered is Drebin [30], a widespread collection of malware considered by researchers for the evaluation of malware detection methods, including several Android malicious families. No ransomware samples are comprised in the Drebin dataset. In all the exploited malicious datasets, each sample is labelled with respect to its malware family. Basically, each family groups applications sharing the same malicious payload. In Table 2, a brief description of the malicious behaviours and the number of the applications involved in the experimental analysis is provided. As shown in Table 2, a total of 2552 malicious samples, belonging to 21 different malicious families, are exploited in the experimental analysis. The last dataset we exploit is composed of 500 legitimate Android applications that we obtained from Google Play by invoking a script exploiting a Python API [32] with the aim of searching for and downloading apps. The downloaded applications belong to all 26 available categories (for instance, Comics, Music and Audio, Games, Local Transportation, Weather and Widgets). For each category we considered the most downloaded free applications. The goodware applications were crawled between January 2020 and March 2020. To confirm the trustworthiness of the Google Play applications we considered the VirusTotal service, aimed at checking the applications with 57 different anti-malware engines (for instance, Kaspersky, Symantec and many others): this analysis confirmed that the goodware applications did not exhibit a malicious payload. We take this dataset into account in order to check the false positives but also the true negatives. The (malicious and legitimate) dataset we obtained is composed of 3052 real-world Android applications.
4.2. The μ-Calculus Formulae
In this section, we show an example of a μ-calculus formula we exploited for the detection of malicious payloads in the Android environment. The formulae were generated by the authors by deeply inspecting the code of a couple of malicious samples per family.
The idea is to codify the malicious payload in a Temporal Logic formula to easily verify the maliciousness of an application without additional work from the security analysts, also providing end users with a method for malware detection. We recall also that the proposed method is aimed at detecting exactly the package, the class and the method, with the related Java bytecode instructions, performing the harmful action. As previously stated in the method section, we represent Android applications in terms of automata and we verify several Temporal Logic properties (expressed in μ-calculus) to check, in a first step, whether the application under analysis exhibits potentially malicious behaviour and, in a second step, to effectively confirm the maliciousness. We formulate two properties for each considered family: the first property is aimed at detecting whether the application can exhibit a potentially malicious behaviour belonging to this family, while the second one confirms the maliciousness of the analysed application. In particular, with the first property we aim to detect whether there exists in the application a method showing a behaviour that can be used for malicious purposes; then, with the second property, we verify whether this behaviour is effectively employed for malicious purposes. We evaluate the first property on an automaton built by exploiting the CFG of the application, while the second property is evaluated on an automaton built on the application CG. Below, we show a series of code snippets to better understand how we built both the properties and the related automata. In detail, the following snippets are related to a malicious Android sample identified by the 6fdbd3e091ea01a692d2842057717971 hash, belonging to the Simplelocker family. Listing 3 shows the Java code snippet obtained for the sample belonging to the Simplelocker family. Basically, it exhibits several instructions aimed at ciphering a file using AES encryption.
|Listing 3: A Java code snippet of a sample belonging to the Simplelocker family.
The bytecode obtained from the Java code snippet in Listing 3 is shown in Listing 4.
|Listing 4: A Java bytecode snippet of a sample belonging to the Simplelocker family.
As shown in the Java bytecode snippet in Listing 4, there are several ldc instructions, aimed at pushing a one-word constant onto the operand stack, containing all the arguments used by the ciphering operation. Moreover, in the bytecode we also found the invocation of classes considered for ciphering, such as MessageDigest, Cipher and SecretKeySpec. As stated in the method section, we resort to bytecode because it is always obtainable, even if the application is obfuscated with strong morphing techniques. In Figure 3, there is a fragment of the CCS automaton obtained from the Java bytecode snippet shown in Listing 4. The fragment basically shows the sequence of the bytecode instructions, with the only exception that the bytecode ldc instructions are replaced with push ones. The Temporal Logic property for the identification of potentially malicious behaviour belonging to the Simplelocker family evaluates to true on a model if the different actions (i.e., bytecode instructions) expressed in the property are present in the model, regardless of the number of other actions that are within the model.
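As an aside, the presence check that this first property performs can be approximated outside the model checker with a few lines of Python; this is only an illustrative sketch (the actual method model-checks μ-calculus formulae against CCS automata), and the action names below are the ones quoted in the text:

```python
# Approximation of the first property: do the symptomatic bytecode actions
# occur, in order (with gaps allowed), in the action trace of a method's CFG
# automaton? Not the authors' verifier, just an intuition aid.
REQUIRED_ACTIONS = [
    "pushSHA256",
    "invokegetInstancepushAESCBCPKCS7Padding",
    "newjavaxcryptospecSecretKeySpec",
    "pushAES",
    "invokeinit",
]

def is_subsequence(required, trace):
    """True if `required` occurs as an ordered (possibly gapped) subsequence of `trace`."""
    it = iter(trace)
    return all(action in it for action in required)

# Usage sketch: `trace` would be the list of action labels of one CFG automaton.
trace = ["push...", "pushSHA256", "invokegetInstancepushAESCBCPKCS7Padding",
         "newjavaxcryptospecSecretKeySpec", "pushAES", "invokeinit"]
print(is_subsequence(REQUIRED_ACTIONS, trace))   # True -> potentially malicious
```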
For the sample under analysis, the CFG automaton satisfies this property, because it exhibits the following actions: pushSHA256 (in row 2 of the fragment in Figure 3), invokegetInstancepushAESCBCPKCS7Padding (in row 15 of the fragment in Figure 3), newjavaxcryptospecSecretKeySpec (in row 17 of the fragment in Figure 3), pushAES (in row 18 of the fragment in Figure 3) and invokeinit (in row 19 of the fragment in Figure 3). In this case, we mark the application under analysis as potentially malicious. As a matter of fact, there are several applications that perform ciphering operations for legitimate purposes, for instance for data protection. For this reason, as depicted in the main pictures of the proposed method, shown in Figure 1 and Figure 2, we perform a deeper analysis of the application, by building the CG and the related automaton, to understand all the application paths invoking the method (i.e., the CFG automaton) satisfying the previous property. Coherently with the proposed method, once at least one method is found (i.e., a CFG automaton) satisfying a property, we find all the paths in the application under analysis invoking the potentially malicious method. In Listing 5, we show two methods invoking the ciphering method: the first method, starting from line 23 in Listing 5, looks for all the file paths on the device and stores the file paths in an ArrayList variable (declared as a private instance variable).
|Listing 5: Two methods invoking the ciphering method.
We highlight that the filenames are stored only if the file extension (computed starting at line 32 of Listing 5) belongs to a value of the org.simplelocker.b.a list, containing all the file extensions to select for the ciphering operations. For the reader's clarity, in Figure 4 we show the values of this list, i.e., the list of file extensions that are candidates for the ciphering operation. The second method, starting from line 43 in Listing 5, is aimed at retrieving the ArrayList with the list of files that are candidates for the ciphering operation and, by invoking the potentially malicious method in a loop (i.e., once for each file), it performs the ciphering operation. We highlight that in the Simplelocker family the ciphering password is also hard coded in the application, as shown in line 48 of Listing 5. Other ransomware families we considered employ more obfuscation techniques; for instance, they obtain the password from a command and control server that is able to generate ad hoc passwords for each infected device (usually derived from the device IMEI). We note also, as shown in lines 51 and 52 of Listing 5, that once a file is ciphered, the original file is deleted and the ciphered file is stored with the .enc extension. From the Java code snippet in Listing 5, we generate the Call Graph to build the CG automaton shown in Listing 6.
|Listing 6: The CG automaton built from the Call Graph of the methods in Listing 5.
From the CG CCS automaton in Listing 6, several considerations emerge: first of all, there is the creation of an ArrayList (the one containing the list of files to cipher), as shown in line 2. Moreover, line 5 is symptomatic of a cyclic invocation on a file (the M37 proc returns on itself). In line 3 (i.e., proc M39) there is also the delete operation on a file (javaioFile_delete), and in lines 8 and 9 there are also several actions symptomatic of the ciphering operation.
The property for the identification of the malicious behaviour is, in this case, the one shown in Table 4. The temporal property is aimed at confirming that the sample under analysis is effectively performing a ciphering operation coherently with the Simplelocker family behaviour; in particular, the property verifies that the CG slice automaton simultaneously exhibits the javautilArrayList_init, javaioFile_delete, javaioFile, javaxcryptoCipherOutputStream_write and javasecurityMessageDigest_update actions, symptomatic of the operations related to file reading, ciphering and deleting.
4.4. Experimental Results
Below, we present the results obtained by the proposed approach. To evaluate the effectiveness in terms of malicious family detection, we compute the precision, recall, F-measure and accuracy metrics.
Precision represents the proportion of the Android applications truly belonging to a certain family among all those which were labelled as belonging to this family. It is the ratio of the number of relevant applications retrieved to the total number of irrelevant and relevant applications retrieved: Precision = tp / (tp + fp), where tp represents the number of true positives and fp represents the number of false positives.
Recall is defined as the proportion of Android applications assigned to a certain malicious family among all the Android applications truly belonging to the family under analysis; in other words, how many samples of the family under analysis were retrieved. It represents the ratio of the number of relevant applications retrieved to the total number of relevant applications: Recall = tp / (tp + fn), where fn is the number of false negatives.
F-measure represents the weighted average of the recall and precision metrics: F-measure = 2 · (Precision · Recall) / (Precision + Recall).
Accuracy represents the fraction of classifications resulting as correct and is calculated as the sum of tp and tn divided by the full set of the Android applications considered: Accuracy = (tp + tn) / (tp + tn + fp + fn), where tn represents the number of true negatives.
In Table 5, we present the results we obtained using the proposed approach. The proposed approach reaches an accuracy ranging between 0.97 and 1. In particular, for several families (Plankton, Opfake, Doublelocker, Locker and Simplelocker) we obtain an accuracy equal to 1, showing that we are able to detect all the samples belonging to the specific family without misclassifying samples belonging to other families. An accuracy of 0.97 is obtained for the FakeInstaller family, while for the DroidKungFu, GinMaster, Adrd, Kmin, Geinimi, Koler, Pletor and Svpeng families we obtain an accuracy of 0.98. The remaining families (i.e., DroidDream, Fusob, Jisuit, Lockerpin and Porndroid) obtain an accuracy of 0.99, showing the ability of the proposed method to correctly detect most of the samples with the right belonging family.
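As a quick companion to the metric definitions above, a minimal Python sketch computing the four per-family metrics from the confusion-matrix counts (a generic helper, not code from the paper):

```python
def family_metrics(tp, fp, fn, tn):
    """Precision, recall, F-measure and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy

# Example with made-up counts for one family:
print(family_metrics(tp=95, fp=2, fn=5, tn=2950))
```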
The recently updated DualToy Windows Trojan can install itself on Android and iOS devices connected via a USB connection.
DualToy Is the Latest Trojan That Affects Both Android and iOS Devices
DualToy is not new: the first version of the Trojan appeared in January 2015, and it only infected Android devices. A newer version was identified that can compromise iOS targets as well. A recent spike in infections was detected by security researchers, indicating that there are 8000 active samples on the Web. The Trojan is written in the C++ and Delphi programming languages, and its behaviour follows an encoded pattern. Upon intrusion, the software downloads and installs the Android Debug Bridge (ADB) and the iTunes drivers. The two utilities are used by DualToy to interact with any connected Android or iOS device. The Trojan assumes that all connected phones and tablets are the property of the computer's owner. The malware uses the pairing and authorization records stored on the computer to authenticate to the connected device. After access is confirmed, DualToy contacts the remote C&C servers and installs applications according to a predefined list. The programmers have included special code that roots the devices and gives the Trojan the ability to install applications in the background without user confirmation. Infected iOS devices are also harvested for their IMEI, IMSI, ICCID, serial number and phone number, currently for unknown reasons. DualToy also collects the user's Apple ID and stored password, which are forwarded in encrypted form to the malicious servers. The sideloaded applications show ads that generate profit for the operators of the malware. The Trojan also has an additional feature: if the user does not connect a smart device to the computer, it modifies browser settings to inject ads. DualToy is an example of a cyber threat where the main motive for infection is generating money through advertising. It can cause potential damage, but the target is not the computer user's files. DualToy mainly targets China, the United States, the UK, Thailand, Spain, and Ireland.
A methodology to counter DoS attacks in mobile IP communication - Publication Type: - Journal Article - Mobile Information Systems, 2012, 8 (2), pp. 127 - 152 - Issue Date: Similar to wired communication, Mobile IP communication is susceptible to various kinds of attacks. Of these attacks, Denial of Service (DoS) attack is considered as a great threat to mobile IP communication. The number of approaches hitherto proposed to prevent DoS attack in the area of mobile IP communication is much less compared to those for the wired domain and mobile ad hoc networks. In this work, the effects of Denial of Service attack on mobile IP communication are analyzed in detail. We propose to use packet filtering techniques that work in different domains and base stations of mobile IP communication to detect suspicious packets and to improve the performance. If any packet contains a spoofed IP address which is created by DoS attackers, the proposed scheme can detect this and then filter the suspected packet. The proposed system can mitigate the effect of Denial of Service (DoS) attack by applying three methods: (i) by filtering in the domain periphery router (ii) by filtering in the base station and (iii) by queue monitoring at the vulnerable points of base-station node. We evaluate the performance of our proposed scheme using the network simulator NS-2. The results indicate that the proposed scheme is able to minimize the effects of Denial of Service attacks and improve the performance of mobile IP communication. © 2012 IOS Press and the authors. All rights reserved. Please use this identifier to cite or link to this item:
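As a rough illustration of method (i), periphery-router ingress filtering, the sketch below drops packets whose source address does not belong to the prefixes expected on the arrival interface; the interface names, prefixes and Packet type are hypothetical placeholders, not taken from the paper:

```python
# Conceptual ingress-filtering check: a packet whose source address does not
# match the prefixes expected on its arrival interface is treated as possibly
# spoofed (a common DoS technique) and filtered.
import ipaddress
from dataclasses import dataclass

EXPECTED_PREFIXES = {
    "if-internal": [ipaddress.ip_network("10.0.0.0/8")],
    "if-visited":  [ipaddress.ip_network("192.168.10.0/24")],
}

@dataclass
class Packet:
    src: str
    iface: str

def ingress_filter(pkt: Packet) -> bool:
    """Return True if the packet should be forwarded, False if filtered as suspected spoofing."""
    src = ipaddress.ip_address(pkt.src)
    prefixes = EXPECTED_PREFIXES.get(pkt.iface, [])
    return any(src in net for net in prefixes)

print(ingress_filter(Packet("10.1.2.3", "if-internal")))     # True  -> forward
print(ingress_filter(Packet("203.0.113.7", "if-internal")))  # False -> drop as suspicious
```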
Gateway API Glossary
Consumer Route: A Route bound to a workload's Service by a consumer of a given workload, refining the specific consumer's use of the workload.
Gateway Controller: A gateway controller is software that manages the infrastructure associated with routing traffic across contexts using the Gateway API, analogous to the earlier ingress controller concept. Gateway controllers often, but not always, run in the cluster where they're managing infrastructure.
East/West Traffic: Traffic from workload to workload within a cluster.
Endpoint Routing: Endpoint routing is sending requests to a specific Service directly to one of the endpoints of the Service backend, bypassing routing decisions which might be made by the underlying network infrastructure. This is commonly necessary for advanced routing cases like sticky sessions, where the gateway will need to guarantee that every request for a specific session goes to the same endpoint.
North/South Traffic: Traffic from outside a cluster to inside a cluster (and vice versa).
Producer Route: A Route bound to a workload's Service by the creator of a given workload, defining what is acceptable use of the workload. Producer routes must always be in the same Namespace as their workload's Service.
Service Backend: The part of a Kubernetes Service resource that is a set of endpoints associated with Pods and their IPs. Some east/west traffic happens by having workloads direct requests to specific endpoints within a Service backend.
Service Frontend: The part of a Kubernetes Service resource that allocates a DNS record and a cluster IP. East/west traffic often - but not always - works by having workloads direct requests to a Service frontend.
Service Mesh: A service mesh is software that manages infrastructure providing security, reliability, and observability for communications between workloads (east/west traffic). Service meshes generally work by intercepting communications between workloads at a very low level, often (though not always) by inserting proxies next to the workload's Pods.
Service Routing: Service routing is sending requests to a specific Service to the service frontend, allowing the underlying network infrastructure (such as a service mesh) to choose the specific endpoint to which the request is routed.
Workload: An instance of computation that provides a function within a cluster, comprising the Pods providing the compute, and the Deployment/Job/ReplicaSet/etc which owns those Pods.
The intrusion detection system (IDS) has come a long way since James Anderson helped develop some of the early concepts in a 1980s white paper, Computer Security Threat Monitoring and Surveillance. We can be thankful that IDS technology has continued to advance, because attack patterns are changing. Virus writers and hacker groups are continuing to coalesce and develop more virulent code. The IDS plays a critical role in protecting the IT infrastructure. An IDS is a great tool for monitoring network activity, detecting unauthorized access, and alerting the appropriate individuals to an intrusion so that counteractions can be taken. An IDS is typically network or host based, and it has a difficult job -- it must quickly process a vast amount of traffic and classify the results. There are many brands of IDS, but they can be grouped into two broad categories:
- Anomaly detection: functions by learning what's normal and then alerting to abnormal activity.
- Signature detection: functions by matching traffic to a database of known attacks. These attacks have been loaded into the system as signatures.
No matter which method of detection you use, one of the most critical choices you will have to make is where to place the sensors. Sensor placement will determine what types of traffic you will detect. This requires some consideration because, after all, a sensor in the demilitarized zone (DMZ) will work well at detecting misuse there but will prove useless against attackers that are inside the network. Final placement will require that you determine what type of activity you are monitoring for and what policies and guidelines management has put forward. Once sensor placement has been determined, you will still need to perform system tuning and configuration. Without specific tuning, the sensor will generate alerts for all traffic that matches a given criterion, regardless of whether the traffic is indeed something that should produce an alert. An IDS must be trained or programmed to look for suspicious activity. There are four basic responses an IDS can produce:
- True positive: An alarm was generated, and an event did occur.
- True negative: An alarm was not generated, and an event did not occur.
- False positive: An alarm was generated, and an event did not occur.
- False negative: An alarm was not generated, and an event did occur.
The worst of these responses is a false negative. A false negative means that an event did occur but no alert was generated. Spending the appropriate amount of time on tuning can help prevent this. If you would like to get more hands-on IDS experience without sinking a ton of cash, a good place to start is Snort. Snort is a freeware IDS developed by Martin Roesch and Brian Caswell. Snort is a network-based IDS that can be set up on a Linux or Windows host. Although the core program has a command-line interface, many individuals have developed GUIs and add-ons, including SnortSnarf and IDS Center. Snort operates as a network sniffer and logs activity that matches predefined signatures. Signatures can be designed for a wide range of traffic, including Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP).
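To illustrate the signature-detection idea in its simplest possible form, here is a toy Python sketch that flags payloads containing known byte patterns; the "signatures" are invented examples and this is not Snort's rule language or matching engine:

```python
# Toy signature matcher: scan a raw payload for known byte patterns.
# Real IDSs use far richer rules (protocol fields, ports, flow state).
SIGNATURES = {
    "example-shellcode-nop-sled": b"\x90" * 16,
    "example-cleartext-credential": b"PASS admin",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures whose pattern occurs in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"USER root\r\nPASS admin\r\n")
print(alerts)   # ['example-cleartext-credential'] -> would raise an alert
```

A tuned deployment is essentially about choosing which signatures (and thresholds, for anomaly detection) are allowed to raise alerts on each sensor, which is why placement and tuning dominate the false-positive/false-negative tradeoff described above.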
Now that you have been introduced to intrusion detection, I hope you are motivated to start exploring how it could be a useful tool for your organization. A good defense requires detection and response. Intrusion detection can make the difference between a minor security blip and a full-fledged disaster. About the author: Michael Gregg is the president of Superior Solutions Inc., a Houston-based training and consulting firm. He has more than 15 years of experience in IT and is an expert on networking, security and Internet technologies. Michael holds two associate degrees, a bachelor's degree and a master's degree. He presently maintains the following certifications: MCSE, MCT, CTT, A+, N+, CNA, CCNA, CIW Security Analyst and TICSA.
Future of testing: Why Continuous Automated Red Teaming (CART) is making penetration testing and attack simulation tools outdated
The constantly changing cyber world and the rapid adoption of cloud and digital transformation have increased the attack surface many times over. On the other hand, cyber attackers are using sophisticated techniques to make it harder for others to recognise their attacks. Here, they have an inherent advantage: they only need to succeed once. But defenders must succeed every time to thwart the attacks. However, a key problem is that organisations often don't have visibility of their complete attack surface, which changes dynamically, making them all the more vulnerable to cyber attacks.
Why traditional solutions are no longer sufficient to thwart attacks
Organisations have traditionally relied on red teaming to address the challenge. Red teaming is ethical hacking by security teams carried out on a larger and more extensive scale than traditional security testing: the team discovers the organisation's attack surface, then launches simulated attacks to test its blind spots. Another advantage of red teaming is that it enables security teams to attack any target irrespective of the scope of an IP/application. In spite of these inherent advantages, red teaming is not viable for most organisations because it requires multiple tools and manual effort, and only tests a fraction of an organisation's assets at a specific time. This makes it challenging to scale, unaffordable for most organisations, and a point-in-time solution. In addition to red teaming, organisations have relied on penetration testing and Breach and Attack Simulation (BAS) tools. While penetration testing can only be done on known systems or applications, BAS requires hardware or software agents to be installed and to function within an organisation. BAS tools simulate real threats and show how an attacker could spread if it has access to an organisation's internal systems.
The inherent challenges with traditional security solutions make a strong case for Continuous Automated Red Teaming (CART), an emerging technology which discovers the attack surface and launches safe attacks continuously. It also helps to prioritise the vulnerabilities that are most likely to be attacked, which are typically the path of least resistance. To put it simply, CART automates red teaming and is designed to scale the process and make it more efficient, allowing for continuous discovery of one's attack surface and continuous testing. This makes CART a game-changing strategy in cybersecurity. In addition, CART, unlike penetration testing, finds the attack surface automatically without any inputs. It then launches multi-stage attacks that range from networks to applications to humans. And, unlike BAS, CART uses an outside-in approach to attack and does not require any hardware or software. Although hackers are sophisticated and have advanced detection and prevention capabilities, CART can help organisations stay ahead of the game by helping them think like a hacker. An organisation needs to have the ability to discover and map its attack surface and attack it continuously to see all possible ways that an attacker could gain access from the outside in.
CART vs traditional solutions: Why one needs to think like hackers
Today, CART makes way for a more efficient system, allowing for continuous discovery of one's attack surface and continuous testing.
At FireCompass, we have developed a SaaS platform for CART and Attack Surface Management (ASM). The ‘Attack & Recon Platform’ of FireCompass continuously indexes and monitors the deep, dark and surface webs. The platform automatically discovers an organisation's digital attack surface, including unknown exposed databases, code leaks, cloud buckets, and related security risks. It then launches multi-stage safe attacks, mimicking a real attacker, to help identify breach and attack paths that are otherwise missed by conventional tools. The different types of attack playbooks include ransomware, network and application attacks, and social engineering attacks. The platform works with zero knowledge and does not require any software or hardware to identify the risks across an organisation's digital attack surface. FireCompass' Attack & Recon Platform automates attack planning and thinking, which helps organisations with 20 times faster detection of security risks and 90 percent lower manual effort. Eliminating the need for multiple tools, the platform does not need any hardware, software or agents and takes virtually zero set-up time. The platform presents an exciting proposition in the cyber security space, enabling organisations to strengthen their security strategies and stay a step ahead of hackers.
The third element for managing web use and security is tracking and reporting. HND can run a Network Activity report locally using LELA (Figure 9), which provides a log of all device state changes and activity. Figure 9: Network Activity report - all devices Another tracking component is a report on network activity by device. Selecting the Report and Notification Settings option in HND will redirect you to a Linksys web page where you log in and select the devices you want included in the report, as well as the time intervals you want reported. Intervals available are Daily, Weekly, and Monthly. Reports are then run "in the cloud" and you are notified via email when they are ready. I enabled this feature for daily reporting on all devices and received an email every day with a link to a Linksys website where I could log in and view the reports as shown in Figure 10. Figure 10: Network activity by Device report The data on the top right of the report shows total websites blocked by category on my network. On the top left is a bar chart showing network "risk" by device. My device labeled Mac1 generated the most "risky" traffic. You can do a limited drill-down on each device via the Details section on the bottom, which will show the number of block events by category. This is the extent of the drill-down, however, which is pretty limited. I would have expected report options that displayed the actual blocked URLs, or at least the site domains. A traffic summary of Top N blocked domains across all devices would also be informative. For example, in Figure 10, I've selected my "riskiest" device named Mac1, which appears to have earned that distinction by having 62 blocked Web Advertisement events. I'm not sure how much a threat this category is because there is very little information provided about each of the Content Filter categories. There is some information in the LELA online help. But there is also a well-hidden Home Network Defender manual that doesn't shed much light, either. This is where the ability to see the actual blocked URLs would be very helpful. While it isn't integrated at the router level, HND includes four licenses for TrendMicro Antivirus plus Antispyware software, which I installed on my Vista PC without issue. This bundling helps enhance the value of HND's subscription fee, but that's the limit of its integration. Neither LELA nor HND monitor its use or whether clients are keeping current on AV signatures. For example, the Details option (discussed back in Figure 3) for each device produces a summary of that device's known attributes as shown in Figure 11. Figure 11: Device Details Notice that HND/LELA recognizes my connection to the router at 953 Mbps as well as the fact that I have LELA version 3.1 installed on this PC. I would think it wouldn't take much for TrendMicro to be able to keep tabs on the "health" of its AV client and this would further enhance HND's value (as well as network security). I have touched on how HND compares to other home network security products. But now it's time to dig a little deeper. I have had the opportunity to review multiple security products for small networks, including small-business oriented Unified Threat Management (UTM) routers such as the SonicWall TZ190W, D-Link DFL-CPG310, and Zyxel USG100 and an open source solution called Copfilter. I have also looked at Bill Meade's review of the Yoggie Gatekeeper Pro and Craig Ellison's D-Link's SecureSpot review. 
The router based solutions from SonicWall, D-Link, and Zyxel are more powerful network devices with dual WAN and VPN functionality as well as Intrusion Detection and Prevention (IDS/IPS) capable firewalls. So comparing HND to them isn't perhaps fair. But sizing up HND against the others is fair game. Copfilter is a pretty cool open source (and subscription free) UTM solution that works with the Linux-based IPCop router/firewall. It lacks content filtering in its base form, however, although you can use another open source application, OpenDNS, to add it. But Copfilter, IPCop and OpenDNS are for the technically adventurous only; installing and configuring them requires skills way beyond the "Mom and Pop" user that is HND's target. Yoggie's Gatekeeper Pro is an interesting option because it can be used with any router. The current version has evolved considerably since our review and is probably due for another look. It offers more features than HND including individual protocol filtering, email spam filtering, download size control and more. But its web filtering has holes similar to HND's and it's more costly, requiring the purchase of $200 hardware and a $70 annual subscription fee (for 5 computers). Of these different security choices, HND is most similar to D-Link's SecureSpot, which is probably its competitive target. SecureSpot is available with D-Link's DIR-625, DIR-628 and DIR-655 routers. HND and SecureSpot are similar in features but differ in implementation. The key difference is that all of HND's functions run in the router, except for its AV, and it requires a Windows application for administration. SecureSpot runs its parental control, timed access and threat blocking features in the router, as well as its administration console. But SecureSpot requires a Windows-only "thin client" to provide individualized parental control filtering in addition to its AV functions. Table 2 compares HND and SecureSpot key features.

| Feature | Home Network Defender | SecureSpot |
| --- | --- | --- |
| Control Software | Windows Application | Web Application |
| Client AV Licenses | 4 | 3 |
| Content Filtering (CF) | TrendMicro | Bsecure |
| CF Age Groups | 4 | 4 |
| CF Custom Option | Y | Y |
| Free 30 Day Trial | Y | Y |
| Routers | Linksys WRT310N, WRT610N (160N "soon") | D-Link DIR-625, DIR-628, DIR-655 |

(*HND has an introductory price of $49.99/year to expire 60 days after its 2/17/09 launch.)
Table 2: HND and SecureSpot Comparison

SecureSpot has the edge on HND for reporting (it includes email / SMS block alerts, activity calendar view and you can see the URLs of blocked sites). But HND has the edge for robustness, since all filtering and access control is router-based and doesn't depend on client-based software. Both products, however, have (very) imperfect web content filtering, a trait they share with every other parental control "solution". Pricing is uncannily similar, but note that HND includes four AV seats vs. SecureSpot's three. Both are nice values, compared to buying AV software separately, but more so with HND. According to Pricegrabber, one license for TrendMicro Antivirus plus Antispyware 2008 costs around $30. Thus, for the cost of two AV licenses you get HND for free, plus two more licenses.
From MikroTik Wiki
/ip firewall address-list
Firewall address lists allow the user to create lists of IP addresses grouped together. Firewall filter, mangle and NAT facilities can use address lists to match packets against them.
Properties:
- address (IP address/netmask | IP-IP; Default: ) - IP address or range to add to the address list
- list (string; Default: ) - Name of the address list where to add the IP address
The following example creates an address list of people that are connecting to port 23 (telnet) on the router and drops all further traffic from them. Additionally, the address list will contain one static entry of address=220.127.116.11/32 (www.example.com):
[admin@MikroTik] > /ip firewall address-list add list=drop_traffic address=18.104.22.168/32
[admin@MikroTik] > /ip firewall address-list print
Flags: X - disabled, D - dynamic
 #   LIST          ADDRESS
 0   drop_traffic  22.214.171.124
[admin@MikroTik] > /ip firewall mangle add chain=prerouting protocol=tcp dst-port=23 \
\... action=add-src-to-address-list address-list=drop_traffic
[admin@MikroTik] > /ip firewall filter add action=drop chain=input src-address-list=drop_traffic
[admin@MikroTik] > /ip firewall address-list print
Flags: X - disabled, D - dynamic
 #   LIST          ADDRESS
 0   drop_traffic  126.96.36.199
 1 D drop_traffic  188.8.131.52
 2 D drop_traffic  10.5.11.8
[admin@MikroTik] >
As seen in the output of the last print command, two new dynamic entries appeared in the address list. Hosts with these IP addresses tried to initialize a telnet session to the router.
Locky encrypts your data and completely changes the filenames
When Locky is started it will create and assign a unique 16-character hexadecimal number to the victim, which will look like F67091F1D24A922B. Locky will then scan all local drives and unmapped network shares for data files to encrypt. When encrypting files it will use the AES encryption algorithm and only encrypt those files that match the following extensions:
Affected file types
Locky malware can encrypt 164 file types that can be broken down into 11 categories:
Office/Document files (62x): .123, .602, .CSV, .dif, .DOC, .docb, .docm, .docx, .DOT, .dotm, .dotx, .hwp, .mml, .odg, .odp, .ods, .odt, .otg, .otp, .ots, .ott, .pdf, .pot, .potm, .potx, .ppam, .pps, .ppsm, .ppsx, .PPT, .pptm, .pptx, .RTF, .sldm, .sldx, .slk, .stc, .std, .sti, .stw, .sxc, .sxd, .sxi, .sxm, .sxw, .txt, .uop, .uot, .wb2, .wk1, .wks, .xlc, .xlm, .XLS, .xlsb, .xlsm, .xlsx, .xlt, .xltm, .xltx, .xlw, .xml
Scripts/Source codes (23x): .asm, .asp, .bat, .brd, .c, .class, .cmd, .cpp, .cs, .dch, .dip, .h, .jar, .java, .js, .pas, .php, .pl, .rb, .sch, .sh, .vb, .vbs
Media files (20x): .3g2, .3gp, .asf, .avi, .fla, .flv, .m3u, .m4u, .mid, .mkv, .mov, .mp3, .mp4, .mpeg, .mpg, .swf, .vob, .wav, .wma, .wmv
Graphic/Image files (14x): .bmp, .cgm, .djv, .djvu, .gif, .jpeg, .jpg, .NEF, .png, .psd, .raw, .svg, .tif, .tiff
Database files (14x): .db, .dbf, .frm, .ibd, .ldf, .mdb, .mdf, .MYD, .MYI, .odb, .onenotec2, .sql, .SQLITE3, .SQLITEDB
.7z, .ARC, .bak, .gz, .PAQ, .rar, .tar, .bz2, .tbk, .tgz, .zip
CAD/CAM/3D files (8x): .3dm, .3ds, .asc, .lay, .lay6, .max, .ms11, .ms11 (Security copy)
.crt, .csr, .key, .p12, .pem
Virtual HDD (4x): .qcow2, .vdi, .vmdk, .vmx
Data encryption (2x):
Virtual currency (1x):
Because the file type range is very wide, this malware can also affect a large number of businesses. Locky encrypts files on all fixed drives, removable drives and also on RAM disk drives. Remote drives are not affected. Furthermore, Locky will skip any files where the full pathname and filename contain one of the following strings: tmp, winnt, Application Data, AppData, Program Files (x86), Program Files, temp, thumbs.db, $Recycle.Bin, System Volume Information, Boot, Windows
When Locky encrypts a file it will rename the file to the format [unique_id][identifier].locky. So when test.jpg is encrypted it would be renamed to something like F67091F1D24A922B1A7FC27E19A9D9BC.locky. The unique ID and other information will also be embedded into the end of the encrypted file. It is important to stress that Locky will encrypt files on network shares even when they are not mapped to a local drive. As predicted, this is becoming more and more common and all system administrators should lock down all open network shares to the lowest permissions possible. As part of the encryption process, Locky will also delete all of the Shadow Volume Copies on the machine so that they cannot be used to restore the victim's files. Locky does this by executing the following command:
vssadmin.exe Delete Shadows /All /Quiet
In the Windows desktop and in each folder where a file was encrypted, Locky will create ransom notes called _Locky_recover_instructions.txt. This ransom note contains information about what happened to the victim's files and links to the decrypter page. Locky will change the Windows wallpaper to %UserProfile%\Desktop\_Locky_recover_instructions.bmp, which contains the same instructions as the text ransom notes.
Last, but not least, Locky will store various information in the registry under the following keys: - HKCU\Software\Locky\id — The unique ID assigned to the victim. - HKCU\Software\Locky\pubkey — The RSA public key. - HKCU\Software\Locky\paytext — The text that is stored in the ransom notes. - HKCU\Software\Locky\completed — Whether the ransomware finished encrypting the computer
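A defender could look for these markers with a short, Windows-only Python sketch (an illustrative check based on the key names listed above, not a full Locky detector):

```python
# Windows-only sketch: check for the Locky registry markers under HKCU\Software\Locky.
import winreg

LOCKY_VALUES = ["id", "pubkey", "paytext", "completed"]

def check_locky_registry():
    """Return the Locky registry values listed above if the key exists, else None."""
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\Locky")
    except FileNotFoundError:
        return None                         # key absent: no Locky marker found
    found = {}
    for name in LOCKY_VALUES:
        try:
            value, _type = winreg.QueryValueEx(key, name)
            found[name] = value
        except FileNotFoundError:
            pass                            # value not present on this host
    winreg.CloseKey(key)
    return found

if __name__ == "__main__":
    print(check_locky_registry())
```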
HTTP status codes are three-digit codes grouped into five classes. Here are brief explanations for the most common status and error codes. For full definitions, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Status and https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.

1xx Informational. This class of status code indicates a provisional response, consisting only of the status line and optional headers, terminated by an empty line. It includes 100 Continue, 101 Switching Protocols, and 102 Processing.

2xx Success. 202 Accepted is intended for cases where another process or server handles the request, or for batch processing. 203 Non-Authoritative Information means the returned meta-information is not the definitive set available from the origin server. 205 Reset Content, unlike a 204 response, requires the requester to reset the document view; the response MUST NOT include an entity. 206 Partial Content means the server has fulfilled a partial GET request for the resource (byte serving). The response MUST include either a Content-Range header field indicating the range included with the response, or a multipart/byteranges Content-Type including Content-Range fields for each part, along with Date, ETag and/or Content-Location (if the header would have been sent in a 200 response to the same request), and Expires, Cache-Control, and/or Vary (if the field-value might differ from that sent previously). A cache MUST NOT combine a 206 response with other previously cached content if the ETag or Last-Modified headers do not match exactly.

3xx Redirection. 300 Multiple Choices: unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the most appropriate one; depending upon the format and the capabilities of the user agent, selection MAY be performed automatically. Note that HTTP/1.1 servers are allowed to return responses which are not acceptable according to the accept headers sent in the request. 301 Moved Permanently: unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), so that, for example, submitting a form to a permanently redirected resource may continue smoothly. 302 Found: the redirection happens as a "302 Moved" header unless otherwise specified; when a 302 or 303 is received in response to a POST (or PUT/DELETE), it should be assumed that the server has received the data and the redirect should be issued with a separate GET message. The new URI is not a substitute reference for the originally requested resource. 304 Not Modified: if a 304 response indicates an entity not currently cached, the cache MUST disregard the response and repeat the request without the conditional. 305 Use Proxy presents many security issues; for example, an attacking intermediary may insert cookies into the original domain's name space, or may observe cookies or HTTP authentication credentials sent from the user. 308 Permanent Redirect has the same semantics as 301 Moved Permanently, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the redirected request.

4xx Client Error. The 4xx class of status code is intended for cases in which the client seems to have erred. 400 Bad Request is typically caused by a malformed request, whether due to a faulty browser or to human error when manually forming HTTP requests; clearing the browser's cache and cookies can sometimes resolve the issue. 401 Unauthorized semantically means "unauthenticated": authentication is possible but has failed or has not yet been provided (see Basic access authentication and Digest access authentication). 402 Payment Required is reserved for future use. 403 Forbidden: the request was a legal request, but the server is refusing to respond to it; authorization will not help and the request SHOULD NOT be repeated. On a file server, check whether the user that owns the web server worker process has privileges to traverse to the directory that the requested file is in (directories require read and execute permissions to be traversed), and whether an .htaccess file is denying access to specific IP addresses or ranges. 404 Not Found, 409 Conflict, and 422 Unprocessable Entity are other common client errors. 407 Proxy Authentication Required: the client MAY repeat the request with a suitable Proxy-Authorization header field. 411 Length Required: the client MAY repeat the request if it adds a valid Content-Length header field containing the length of the message-body. 428 Precondition Required is intended to prevent the "lost update" problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server while a third party has meanwhile modified the state on the server. 418 I'm a Teapot originates from RFC 2324 (https://tools.ietf.org/html/rfc2324). 444 (an nginx extension defined in ngx_http_special_response.c) returns no information to the client and closes the connection, which is useful as a deterrent for malware. 449 Retry With is a Microsoft extension.

5xx Server Error. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and indicate whether it is a temporary or permanent condition. 501 Not Implemented: the server either does not recognise the request method, or it lacks the ability to fulfill the request. 502 Bad Gateway: the server, while acting as a gateway or proxy, received an invalid response from an upstream server. 503 Service Unavailable: the server is currently unavailable because it is overloaded or down for maintenance. 504 Gateway Timeout: the server did not receive a timely response from an upstream or auxiliary server (e.g., DNS) it needed to access in attempting to complete the request. 511 Network Authentication Required is designed to mitigate problems caused by "captive portals" to software (especially non-browser agents) that is expecting a response from the server to the request that was made.

General troubleshooting tips: when using a web browser to test a web server, refresh the browser after making server changes, and check the server logs for more details about how the server is handling the requests. If the client is sending data, a server implementation using TCP SHOULD be careful to ensure that the client acknowledges receipt of the packet(s) containing the response before the server closes the connection. When connected via HTTP, CuteFTP and the HTTP servers to which you connect can display these codes in the log window.
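A minimal Python sketch (standard library only) showing how the status classes above can be interpreted programmatically; the probed URL is just a placeholder.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

CLASS_NAMES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def describe(status: int) -> str:
    """Map a three-digit HTTP status code to its class."""
    return CLASS_NAMES.get(status // 100, "Unknown")

def probe(url: str) -> None:
    """Issue a HEAD request and report the status code and its class."""
    req = Request(url, method="HEAD")
    try:
        with urlopen(req) as resp:
            print(resp.status, describe(resp.status))
    except HTTPError as err:
        # urllib raises for 4xx/5xx; the code is still available on the exception.
        print(err.code, describe(err.code))

if __name__ == "__main__":
    probe("https://example.com/")  # placeholder URL
```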
In this paper, we consider the IoT data discovery problem in very large and growing scale networks. Specifically, we investigate in depth the routing table summarization techniques to support effective and space-efficient IoT data discovery routing. Novel summarization algorithms, including alphabetical-based, hash-based, and meaning-based summarization and their corresponding coding schemes, are proposed. The issue of potentially misleading routing due to summarization is also investigated. Subsequently, we analyze the strategy of when to summarize in order to balance the tradeoff between the routing table compression rate and the chance of causing misleading routing. For the experimental study, we have collected 100K IoT data streams from various IoT databases as the input dataset. Experimental results show that our summarization solution can reduce the routing table size by 20 to 30 fold with a 2-5% increase in latency when compared with similar peer-to-peer discovery routing algorithms without summarization. Also, our approach outperforms DHT-based approaches by 2 to 6 fold in terms of latency and traffic.
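This is not the paper's coding scheme, just an illustrative sketch of the general idea behind hash-based summarization and why it can mislead routing: a neighbor's reachable data names are folded into a Bloom-filter-like bitmap, which saves space but admits false positives.

```python
import hashlib

class HashSummary:
    """Toy hash-based summary of the data names reachable via one neighbor."""

    def __init__(self, bits: int = 256, hashes: int = 3):
        self.bits = bits
        self.hashes = hashes
        self.bitmap = 0

    def _positions(self, name: str):
        # Derive several bit positions per name from independent hashes.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, name: str) -> None:
        for pos in self._positions(name):
            self.bitmap |= 1 << pos

    def might_contain(self, name: str) -> bool:
        # False positives are possible: this is the "misleading routing" risk.
        return all(self.bitmap & (1 << pos) for pos in self._positions(name))


summary = HashSummary()
for name in ["/campus/bldg1/temp", "/campus/bldg1/humidity", "/campus/bldg2/co2"]:
    summary.add(name)

print(summary.might_contain("/campus/bldg1/temp"))     # True
print(summary.might_contain("/city/street5/traffic"))  # usually False, occasionally a false positive
```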
Authors: Timothy Bollé (University of Lausanne) and Eoghan Casey, Ph.D. (University of Lausanne) DFRWS EU 2018 This work addresses the challenge of discerning non-exact or non-obvious similarities between cybercrimes, proposing a new approach to finding linkages and repetitions across cases in a cyberinvestigation context using near similarity calculation of distinctive digital traces. A prototype system was developed to test the proposed approach, and the system was evaluated using digital traces collected during actual cyber-investigations. The prototype system also links cases on the basis of exact similarity between technical characteristics. This work found that the introduction of near similarity helps to confirm already existing links, and exposes additional linkages between cases. Automatic detection of near similarities across cybercrimes gives digital investigators a better understanding of the criminal context and the actual phenomenon, and can reveal a series of related offenses. Using case data from 207 cyber-investigations, this study evaluated the effectiveness of computing similarity between cases by applying string similarity algorithms to email addresses. The Levenshtein algorithm was selected as the best algorithm to segregate similar email addresses from non-similar ones. This work can be extended to other digital traces common in cybercrimes such as URLs and domain names. In addition to finding linkages between related cybercrime at a technical level, similarities in patterns across cases provided insights at a behavioral level such as modus operandi (MO). This work also addresses the step that comes after the similarity computation, which is the linkage verification and the hypothesis formation. For forensic purposes, it is necessary to confirm that a near match with the similarity algorithm actually corresponds to a real relation between observed characteristics, and it is important to evaluate the likelihood that the disclosed similarity supports the hypothesis of the link between cases. This work recommends additional information, including certain technical, contextual and behavioral characteristics that could be collected routinely in cyber-investigations to support similarity computation and link evaluation.
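A minimal sketch of the kind of near-similarity computation described above, using the Levenshtein edit distance to compare email addresses; the sample addresses and the 0.8 threshold are illustrative assumptions, not values from the study.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,                # deletion
                current[j - 1] + 1,             # insertion
                previous[j - 1] + (ca != cb),   # substitution
            ))
        previous = current
    return previous[-1]

def similarity(a: str, b: str) -> float:
    """Normalize edit distance into a 0..1 similarity score."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a, b) / longest

# Hypothetical addresses observed in two separate cases.
pairs = [("alice.smith01@example.com", "alice.smith02@example.net")]
for x, y in pairs:
    score = similarity(x, y)
    if score >= 0.8:  # illustrative threshold for a "near match"
        print(f"possible link: {x} ~ {y} (score={score:.2f})")
```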
Malware researchers at Yoroi – Cybaze Z-Lab analyzed the MuddyWater infection chain observed in a recent wave of cyber attacks. At the end of November, some Middle East countries were targeted by a new wave of attacks related to the Iranian APT group known as “MuddyWater”: their first campaign was observed back in 2017, and more recently Unit42 researchers reported attacks in the ME area. MuddyWater’s TTPs seem to have remained quite invariant over this time period: they keep using spear-phishing emails containing a blurred document in order to induce the target to enable the execution of VB macro code, infecting the host with the POWERSTATS malware. According to the analysis of the ClearSky Research Team and TrendMicro researchers, at the end of November the MuddyWater group hit Lebanese and Omani institutions and, after a few days, Turkish entities. The attack vector and the final payload were the same: the usual macro-embedded document and the POWERSTATS backdoor, respectively. However, the intermediate stages were slightly different than usual. The Yoroi-Cybaze Zlab researchers analyzed the file “Cv.doc”, the blurred resume used by MuddyWater during their Lebanon/Oman campaign. When the victim enables macro execution, the malicious code creates an Excel document containing the necessary code to download the next stage of the malicious implant. At the same time, it shows a fake error popup saying the Office version is incompatible. The macro code is decrypted before execution with a custom routine. After deobfuscation of the code, it is possible to identify the function used to create the hidden Excel document within the “x1” variable. The macro placed into the new Excel document downloads PowerShell code from a URL apparently referencing a PNG image file, “http://pazazta[.]com/app/icon.png”. The downloaded payload is able to create three new local files, including:
- C:\Windows\Temp\Windows.vbe, containing an encoded Visual Basic script;
- C:\ProgramData\Microsoft.db, containing the encrypted final payload.
In fact, the next malicious stage is executed only when the “Math.round(ss) % 20 == 19” condition is met, otherwise it keeps re-executing itself. The “ss” variable stores the seconds elapsed since 1 January 1970 00:00:00. The final stage consists in the execution of the POWERSTATS backdoor contained in the “Microsoft.db” file. The backdoor contacts a couple of domain names, “hxxp://amphira[.com” and “hxxps://amorenvena[.com”, each one pointing to the same IP address 18.104.22.168 (EU-LINODE-20141229 US). Once executed, the POWERSTATS malware sends generic information about the victim’s machine to the remote server through an encoded HTTP POST request. Then, it starts its communication protocol with the C2, asking for commands to execute on the compromised host. The HTTP parameter “type” classifies the kind of request performed by the malicious implant; during the analysis the following values have been observed:
- info: used in a POST request to send info about the victim;
- live: used in a POST request as a ping mechanism;
- cmd: used both in POST and GET requests. In the first case it sends the last command executed, in the second it retrieves a new command from the server;
- res: used in a POST request to send the result of the last command that the malware has executed.
The parameter “id”, instead, uniquely identifies the victim machine and is calculated using the local system info, unlike the sample analyzed by TrendMicro, which uses only the hard drive serial number.
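As an aside on the delay condition mentioned earlier in the chain, here is the same gate translated into Python for clarity: with ss being the seconds since the Unix epoch, "Math.round(ss) % 20 == 19" only lets execution proceed during one second out of every twenty, which is why short, naive sandbox runs can miss the final stage.

```python
import time

def gate_open() -> bool:
    """Illustrative re-implementation of the script's delay condition."""
    ss = time.time()              # seconds since 1 January 1970 00:00:00 UTC
    return round(ss) % 20 == 19

# The malicious script keeps re-executing itself until the condition is met;
# an analyst emulating it can simply wait for (or force) the right remainder.
while not gate_open():
    time.sleep(0.5)
print("condition met: next stage would run here")
```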
The “id” parameter is also used to create a file in the “C:\ProgramData” folder, used to store temporary information. Analyzing the code extracted and deobfuscated from the “Microsoft.db” file, it is possible to investigate the real capabilities of the POWERSTATS backdoor, identifying the functionalities supported by the malicious implant, such as:
- upload: the malware downloads a new file from the specified URL;
- cmd: the malware executes the specified command;
- b64: the malware decodes and executes a base64 PowerShell script;
- muddy: the malware creates a new encrypted file in “C:\ProgramData\LSASS” containing a PowerShell script and runs it.
The malware implements more than one persistence mechanism. These mechanisms are triggered only in the final stage of the infection, once the POWERSTATS backdoor is executed. The persistence functionalities use simple and well-known techniques, such as redundant registry keys within the “Microsoft\Windows\CurrentVersion\Run” location and the creation of a scheduled task named “MicrosoftEdge” that runs every day at 12 o’clock. This last campaign of the Iranian APT group “MuddyWater” shows a clear example of how hacking groups can leverage system tools and scripting languages to achieve their objectives, maintain a foothold within their target hosts, and exfiltrate data. These attacks also leverage macro-embedded documents as the initial vector, showing how this “well-known” technique can still represent a relevant threat, especially if carefully prepared and contextualized to lure specific victims. Technical details, including Indicators of Compromise and Yara rules, are reported in the analysis published on the Yoroi blog: https://blog.yoroi.company/research/dissecting-the-muddywater-infection-chain/ The post Experts at Yoroi – Cybaze Z-Lab analyzed MuddyWater Infection Chain appeared first on Security Affairs.
Telstra to DNS-block botnet C&Cs with unknown blacklist
What could possibly go wrong other than a C&C net sharing your colo barn's IP address?
Telstra is preparing to get proactive with malware, announcing that it will be implementing a DNS-based blocker to prevent customer systems from contacting known command-and-control servers. The “malware suppression” tool will be introduced at no cost for fixed, mobile and NBN customers using domestic broadband and Telstra Business Broadband services. The service is using a command-and-control address list sourced from an unnamed Californian partner, and the carrier maintains that it won't be recording users' browsing history. However, there seems to be a little confusion between different arms of the carrier as to how the malware suppression service works. Here's how the promotional blog post discusses the technology: “Because the malware suppression technology only observes DNS queries and not internet traffic, no internet search history, browsing data or any other customer data is recorded, retained or sent to a third party.” (Vulture South notes that the last time we looked, DNS queries travelled over the Internet. We therefore conclude that Telstra is trying to reassure customers that the content of their browsing is not examined.) In its support Q&A, the carrier states: “We do not retain a record of legitimate DNS queries made by your computer and those legitimate queries will be unaffected by the new malware suppression” (emphasis added). As the same page notes, if the carrier has reason to query (sorry) a DNS query, it will fire off a query to California: “At times, the DNS server may notice a pattern of queries from a number of different users which looks suspicious (for example, why would a real user try to go to a domain like qwe54fggty.dyndns.biz?). In this case, information about the suspicious target domain might be sent to our partner in California to examine whether the domain is a botnet or command & control server.” However, it states, in requesting that a domain be examined by its blacklist supplier, it will not pass on any information to identify the user or users trying to contact that domain. In response to The Register's questions, a Telstra spokesperson provided this statement: "We are introducing malware suppression technology to the Telstra BigPond Network to help improve safety and security of the internet for our customers. We have developed the upgrade to our network with a technology partner, a firm based in the United States. The malware suppression technology does not look at any content our customers are sending or receiving, rather it prevents our customer's computers from being controlled by Command and Control servers. The malware suppression service being deployed on the Telstra BigPond Network works on DNS queries only going to verified Command and Control servers." Which is likely to be all very well and good, until some poor sap finds their IP address lives on a server also occupied by a C&C server. Such a scenario is not beyond the realms of possibility: in May 2013 Australia's de facto internet filter blocked access to hundreds of sites when the intention was to block just one. Telstra must be hoping its un-named source of C&C systems doesn't make the same mistake. ®
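To make the mechanism concrete, here is a toy sketch of the DNS-level suppression idea described above: a resolver wrapper that refuses to resolve domains on a C&C blocklist and passes everything else through. The blocklist entries are made up (the dyndns example is the one quoted in the article), and a real deployment would sit in the ISP's resolver, not on the client.

```python
import socket
from typing import Optional

# Hypothetical blocklist of known command-and-control domains.
CNC_BLOCKLIST = {"qwe54fggty.dyndns.biz", "evil-c2.example"}

def resolve(hostname: str) -> Optional[str]:
    """Resolve a hostname unless it appears on the C&C blocklist."""
    if hostname.lower().rstrip(".") in CNC_BLOCKLIST:
        print(f"blocked DNS query for {hostname} (known C&C)")
        return None
    return socket.gethostbyname(hostname)

print(resolve("example.com"))            # resolves normally
print(resolve("qwe54fggty.dyndns.biz"))  # suppressed
```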
International Journal of Engineering Research and Applications (IJERA)
Cloud computing is a rapidly evolving topic in Information Technology (IT). Rather than creating, deploying and managing a physical IT infrastructure to host their software applications, organizations are increasingly deploying their infrastructure into remote, virtualized environments, often hosted and managed by third parties. Due to this large scale, investigating an attack over a cloud network is a great challenge. Very little research has been done to develop the theory and practice of cloud forensics.
What is a Zero Trust Architecture
Zero Trust has become one of cybersecurity’s latest buzzwords. It’s imperative to understand what Zero Trust is, as well as what Zero Trust isn’t. Zero Trust is a strategic initiative that helps prevent successful data breaches by eliminating the concept of trust from an organization’s network architecture. Rooted in the principle of “never trust, always verify,” Zero Trust is designed to protect modern digital environments by leveraging network segmentation, preventing lateral movement, providing Layer 7 threat prevention, and simplifying granular user-access control. Zero Trust was created by John Kindervag, during his tenure as a vice president and principal analyst for Forrester Research, based on the realization that traditional security models operate on the outdated assumption that everything inside an organization’s network should be trusted. Under this broken trust model, it is assumed that a user’s identity is not compromised and that all users act responsibly and can be trusted. The Zero Trust model recognizes that trust is a vulnerability. Once on the network, users – including threat actors and malicious insiders – are free to move laterally and access or exfiltrate whatever data they are not limited to. Remember, the point of infiltration of an attack is often not the target location. According to The Forrester Wave™: Privileged Identity Management, Q4 2018, this broken trust model continues to be abused through compromised credentials.1 Zero Trust is not about making a system trusted, but instead about eliminating trust.
A Zero Trust Architecture
In Zero Trust, you identify a “protect surface.” The protect surface is made up of the network’s most critical and valuable data, assets, applications and services – DAAS, for short. Protect surfaces are unique to each organization. Because it contains only what’s most critical to an organization’s operations, the protect surface is orders of magnitude smaller than the attack surface, and it is always knowable. With your protect surface identified, you can identify how traffic moves across the organization in relation to the protect surface. Understanding who the users are, which applications they are using and how they are connecting is the only way to determine and enforce policy that ensures secure access to your data. Once you understand the interdependencies between the DAAS, infrastructure, services and users, you should put controls in place as close to the protect surface as possible, creating a microperimeter around it. This microperimeter moves with the protect surface, wherever it goes. You can create a microperimeter by deploying a segmentation gateway, more commonly known as a next-generation firewall, to ensure only known, allowed traffic or legitimate applications have access to the protect surface. The segmentation gateway provides granular visibility into traffic and enforces additional layers of inspection and access control with granular Layer 7 policy based on the Kipling Method, which defines Zero Trust policy based on who, what, when, where, why and how. The Zero Trust policy determines who can transit the microperimeter at any point in time, preventing access to your protect surface by unauthorized users and preventing the exfiltration of sensitive data. Zero Trust is only possible at Layer 7.
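This is not a product configuration, just a conceptual sketch of a Kipling Method rule set: each policy statement records who, what, when, where, why and how, and traffic crossing the microperimeter is allowed only if it matches an explicit rule (default deny). All names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class KiplingRule:
    who: str     # asserted user or service identity
    what: str    # application at Layer 7, not just port/protocol
    when: str    # permitted time window
    where: str   # destination protect-surface (DAAS) element
    why: str     # business justification / data classification
    how: str     # permitted path, e.g. via a specific segmentation gateway

# Illustrative rule set for a hypothetical protect surface.
RULES = [
    KiplingRule(who="billing-app-svc", what="postgresql", when="any",
                where="customer-db", why="payment processing", how="seg-gw-1"),
]

def is_allowed(who, what, when, where, how) -> bool:
    """Default deny: traffic must match an explicit rule to cross the microperimeter."""
    return any(
        r.who == who and r.what == what and r.where == where and r.how == how
        and r.when in ("any", when)
        for r in RULES
    )

print(is_allowed("billing-app-svc", "postgresql", "business-hours", "customer-db", "seg-gw-1"))  # True
print(is_allowed("laptop-123", "ssh", "night", "customer-db", "seg-gw-1"))                        # False
```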
Once you’ve built your Zero Trust policy around your protect surface, you continue to monitor and maintain it in real time, looking for things like what should be included in the protect surface, interdependencies not yet accounted for, and ways to improve policy.
Zero Trust: As Dynamic as Your Enterprise
Zero Trust is not dependent on a location. Users, devices and application workloads are now everywhere, so you cannot enforce Zero Trust in one location – it must be proliferated across your entire environment. The right users need to have access to the right applications and data. Users are also accessing critical applications and workloads from anywhere: home, coffee shops, offices and small branches. Zero Trust requires consistent visibility, enforcement and control that can be delivered directly on the device or through the cloud. A software-defined perimeter provides secure user access and prevents data loss, regardless of where the users are, which devices are being used, or where your workloads and data are hosted (i.e. data centers, public clouds or SaaS applications). Workloads are highly dynamic and move across multiple data centers and public, private, and hybrid clouds. With Zero Trust, you must have deep visibility into the activity and interdependencies across users, devices, networks, applications and data. Segmentation gateways monitor traffic, stop threats and enforce granular access across north-south and east-west traffic within your on-premises data center and multi-cloud environments.
Deploying Zero Trust
Achieving Zero Trust is often perceived as costly and complex. However, Zero Trust is built upon your existing architecture and does not require you to rip and replace existing technology. There are no Zero Trust products. There are products that work well in Zero Trust environments and those that don't. Zero Trust is also quite simple to deploy, implement and maintain using a simple five-step methodology. This guided process helps identify where you are and where to go next:
1. Identify the protect surface
2. Map the transaction flows
3. Build a Zero Trust architecture
4. Create Zero Trust policy
5. Monitor and maintain
Creating a Zero Trust environment – consisting of a protect surface that contains a single DAAS element protected by a microperimeter enforced at Layer 7 with Kipling Method policy by a segmentation gateway – is a simple and iterative process you can repeat one protect surface/DAAS element at a time. To learn more about Zero Trust and implementing it within your organization, read the white paper, Simplify Zero Trust Implementation with a Five-Step Methodology.
How to Achieve a Zero Trust Architecture
Use Zero Trust to gain visibility and context for all traffic – across user, device, location and application – plus zoning capabilities for visibility into internal traffic. To gain traffic visibility and context, it needs to go through a next-generation firewall with decryption capabilities. The next-generation firewall enables micro-segmentation of perimeters, and acts as border control within your organization. While it’s necessary to secure the external perimeter border, it’s even more crucial to gain the visibility to verify traffic as it crosses between the different functions within the network. Adding two-factor authentication and other verification methods will increase your ability to verify users correctly.
Leverage a Zero Trust approach to identify your business processes, users, data, data flows, and associated risks, and set policy rules that can be updated automatically, based on associated risks, with every iteration. To learn more about Zero Trust and implementing Zero Trust networks, read the whitepaper, "5 Steps to Zero Trust" or view the “How to Enable Zero Trust Security for your Data Center” webinar. You can also view the following pages on the Palo Alto Networks website for additional information: - Network Segmentation/Zero Trust - Next-Generation Firewall - VM-Series Virtualized Next-Generation Firewall 1 The Forrester Wave™: Privileged Identity Management, Q4 2018. https://www.forrester.com/report/The+Forrester+Wave+Privileged+Identity+Management+Q4+2018/-/E-RES141474
The last windows that we will discuss are those that IDA does not open by default. Each of these windows is available via View ▸ Open Subviews, but they tend to provide information to which you may not require immediate access and are thus initially kept out of the way. The Strings window is the built-in IDA equivalent of the strings utility and then some. In IDA versions 5.1 and earlier, the Strings window was open as part of the default desktop; however, with version 5.2, the Strings window is no longer open by default, though it remains available via View ▸ Open Subviews ▸ Strings. The purpose of the Strings window is to display a list of strings extracted from a binary along with the address at which each ...
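For readers who script IDA, a short IDAPython sketch (run from IDA's script or console window, assuming the standard idautils module is available) that iterates the same string items the Strings window displays and prints each address and value.

```python
# Run inside IDA; idautils ships with IDAPython.
import idautils

strings = idautils.Strings()  # reflects the current Strings window settings
for s in strings:
    # Each item exposes its effective address, length, and the string itself.
    print("0x%X  len=%d  %s" % (s.ea, s.length, str(s)))
```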
4help – Ransomware
4help is a ransomware-type infection. The virus comes from the Dharma ransomware family. 4help was designed specifically to encrypt all major file types. Once a file is encrypted, people are not able to use it. 4help adds the “.[[email protected]].4help” extension to each file it encrypts. For example, the file “myphoto.jpg”, as soon as it is encrypted by 4help, will be renamed to “myphoto.jpg.[[email protected]].4help”. As soon as the encryption is finished, 4help places a special text file into every folder containing the encrypted data. The ransom-demanding message in the 4help text file is essentially the same as the statements given by other ransomware representatives belonging to the Dharma clan. It states that the data is encrypted and that the only way to bring it back is to use a unique decryption key. Unfortunately, this is absolutely true. The cryptography mechanism applied by 4help has not yet been properly examined. Still, it is certain that each victim is given a specific decryption key, which is completely distinct. It is impossible to restore the files without the key. Another trick of 4help is that the victims cannot gain access to the key. The key is kept on a server run by the frauds behind 4help ransomware. To get the key and recover the important information, people are told to pay the ransom. Nevertheless, regardless of the amount asked, people should stay away from paying the ransom. Cyber frauds are not fair, so they tend to completely disregard what their victims feel about the issue, even when the payment reaches their pockets. This is why paying the ransom usually does not give any positive result, and people simply waste their money for nothing. We strongly recommend that you do not contact these crooks and definitely do not transfer money into their accounts. It is sad to admit that there are no utilities able to crack 4help ransomware and recover the data for free. Thus, the best decision is to recover the lost data from an available backup.
Short Description: The ransomware encrypts all the data stored on your system and requires a ransom to be paid on your part supposedly to recover your important files.
Symptoms: File encryption by the ransomware is performed by means of the AES and RSA encryption algorithms. Once the encryption is completed, the ransomware adds its special [[email protected]].4help extension to all the files modified by it.
Distribution Method: Spam Emails, Email Attachments
Similar Infections: Text, Wcg, Con30
Removal Tool: GridinSoft Anti-Malware
Remember that the web is now overwhelmed with threats that look comparable to 4help ransomware. It is similar to Text and many other ransomware-type threats. Malicious programs of this kind are normally designed to encrypt essential information and to demand that the user pay a ransom. The peculiarity of all such ransomware threats is that they all apply a similar algorithm to generate the distinct decryption key for data decryption. Therefore, unless the ransomware is still being developed or has some hidden bugs, manually recovering the data is simply not feasible. The only method to prevent the loss of your essential data is to regularly create backups of your important information.
Bear in mind that even if you create such backups, they should be placed in a special storage location not connected to your main PC. You may use a USB flash drive or an external hard drive for this purpose, or rely on cloud storage. If you store your backup files on your main system, they may be encrypted together with other files, so it is definitely not a good storage place.
How did ransomware infect my PC?
There are numerous methods used by online scammers to distribute the 4help ransom virus. Although it is uncertain how exactly 4help infects a given system, there are some common channels through which it may infiltrate the system:
- integration with third-party software, especially freeware;
- spam e-mails from unidentified senders;
- websites rendering free hosting services;
- pirated peer-to-peer (P2P) downloads.
Often the 4help virus may be presented as some genuine software application, for instance in pop-ups instructing users to carry out supposedly important software updates. This is a typical trick used by online frauds to persuade people into downloading and installing the 4help infection manually, by means of their direct participation in the installation process. Additionally, the criminals may employ various email spam techniques to inject malicious code into PCs. They may resort to sending unsolicited spam e-mails with deceptive notices encouraging users to download attachments or click on certain download links, for example ones encouraging users to open some photos, documents, tax reports or invoices. Needless to say, opening such files or clicking on such dangerous links may badly harm the PC. Fake Adobe Flash Player update notifications may also result in 4help virus injection. As for cracked applications, these illegally downloaded programs may also include harmful code causing the secret installation of 4help. Finally, injection of 4help may occur by means of Trojans that secretly get into the system and install harmful utilities without the user’s consent.
Is there any way to prevent the injection of the 4help ransom virus?
Although there is no 100% guarantee that your computer will not get infected, there are some pieces of guidance we wish to share with you. First of all, be extremely mindful when you browse the web and especially while downloading free programs. Keep away from opening suspicious email attachments, especially when the sender of the email is not familiar to you. Remember that some freeware installers may include other unwanted utilities in the package, so they may be malicious. Make sure that your anti-virus and your entire OS are always duly updated. Obviously, downloading pirated software is illegal and may lead to serious damage to your system, so stay away from downloading cracked software. You are also strongly advised to reconsider your existing security software and perhaps switch to another security solution that can do a better job of protecting your computer. Below please find the quotation from the 4help text file:
Pop-up window:
YOUR FILES ARE ENCRYPTED
Don't worry, you can return all your files! If you want to restore them, follow this link: email [email protected] YOUR ID - If you have not been answered via the link within 12 hours, write to us by e-mail: [email protected]
Attention!
Do not rename encrypted files. Do not try to decrypt your data using third party software, it may cause permanent data loss.
Decryption of your files with the help of third parties may cause increased price (they add their fee to our) or you can become a victim of a scam.
================================
FILES ENCRYPTED.txt:
all your data has been locked us
You want to return?
write email [email protected] or [email protected]
Use GridinSoft Anti-Malware to remove 4help ransomware from your computer
1. Download GridinSoft Anti-Malware. You can download GridinSoft Anti-Malware by clicking the button below:
2. Double-click on the setup file. When the setup file has finished downloading, double-click on the setup-antimalware-ag.exe file to install GridinSoft Anti-Malware on your computer. A User Account Control prompt will ask you to allow GridinSoft Anti-Malware to make changes to your device, so you should click “Yes” to continue with the installation.
3. Press the Install button to run GridinSoft Anti-Malware.
4. Once installed, GridinSoft Anti-Malware will automatically run.
5. Wait for the GridinSoft Anti-Malware scan to complete. GridinSoft Anti-Malware will automatically start scanning your computer for 4help and other malicious programs. This process can take 20-30 minutes, so we suggest you periodically check on the status of the scan process.
6. Click on “Clean Now”.
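Before or after cleanup, it can help to inventory which files were actually hit, based on the renaming scheme described above. A small sketch that walks a directory tree and lists files carrying the ransomware's extension; the starting path is an example and the extension pattern is taken from the description, so adjust both as needed.

```python
from pathlib import Path

# Files encrypted by 4help end in ".4help" (preceded by the attacker's ID and email).
EXTENSION = ".4help"

def find_encrypted(root: str):
    """Yield every file under `root` whose name ends with the 4help extension."""
    for path in Path(root).rglob("*" + EXTENSION):
        if path.is_file():
            yield path

if __name__ == "__main__":
    hits = list(find_encrypted(r"C:\Users"))  # example starting point
    print(f"{len(hits)} encrypted files found")
    for p in hits[:20]:
        print(p)
```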
While we at GreyNoise have been collecting, analyzing, and labeling internet background noise, we have come to identify patterns among scanners and background noise traffic. Often we’ll see a group of IPs that have the same User-Agent or are sending payloads to the same web path, even though they are coming from different geo-locations. Or, we might see a group that uses the same OS and scanned all the same ports, but they have different rDNS lookups. Or any other combination of very similar behaviors with slight differences that show some version of distributed or obfuscated coordination. With our new IP Similarity feature, we hope to enable anyone to easily sniff out these groups without having an analyst pore over all the raw data to find combinations of similar and dissimilar information. Stay tuned for an in-depth blog covering how we made this unique capability a reality, but for now, here’s a quick snapshot of what the feature does and the use cases it addresses. GreyNoise has a very rich dataset with a ton of features. For IP Similarity we are using a combination of relatively static IP-centric features, things we can derive just from knowing what IP the traffic is coming from or their connection metadata, and more dynamic behavioral features, things we see inside the traffic from that IP. These features are: Of note, for this analysis we do not use GreyNoise-defined Tags, Actors, or Malicious/Benign/Unknown status, as these would bias our results based on our own derived information. The output of the IP Similarity feature has been pretty phenomenal, which is why we’re so excited to preview it. We can take a single IP from our friends at Shodan.io, https://viz.greynoise.io/ip-similarity/126.96.36.199, and return 19 (at the time of writing) other IPs from Shodan, And we can compare the IPs side by side to find out why they were scored as similar. While we have an Actor tag for Shodan which allows us to see that all of these are correct, IP Similarity would have picked these out even if they were not tagged by GreyNoise. As with any machine learning application, the results of IP Similarity will need to be verified by an aware observer, but this new feature holds a lot of promise for allowing GreyNoise users to automatically find new and interesting things related to their investigations. In fact, we see some immediate use cases for IP Similarity to help accelerate and close investigations faster, with increased accuracy, and provide required justifications before acting on the intelligence. For example: For more on how you can use IP similarity in your investigations, check out our recent blog from Nick Roy covering use cases of IP similarity. You can also read more about IP similarity in our documentation. IP Similarity is available as an add on to our paid GreyNoise packages and to all VIP users. If you’re interested in testing these features, sign up for a free trial account today!* (*Create a free GreyNoise account to begin your enterprise trial. Activation button is on your Account Plan Details page.)
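GreyNoise has not published its exact model here, so the following is only an illustrative sketch of the general idea of pairwise scoring described above: represent each IP as a set of observed features and score pairs by overlap (Jaccard similarity in this sketch), flagging anything above a threshold. The feature profiles and the 0.7 threshold are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two feature sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical feature profiles (ports scanned, user-agent, web paths, OS, rDNS pattern).
profiles = {
    "203.0.113.10": {"port:443", "port:80", "ua:ExampleScanner/1.0", "path:/robots.txt", "os:linux"},
    "198.51.100.7": {"port:443", "port:80", "ua:ExampleScanner/1.0", "path:/robots.txt", "os:linux", "rdns:*.scanner.example"},
    "192.0.2.55":   {"port:23", "ua:curl/7.64", "os:windows"},
}

base_ip = "203.0.113.10"
base = profiles[base_ip]
for ip, feats in profiles.items():
    score = jaccard(base, feats)
    if ip != base_ip and score >= 0.7:  # illustrative similarity threshold
        print(f"{ip} is similar to {base_ip} (score={score:.2f})")
```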
At the initial stage of developing parallel machines, a software monitor, which manages communication between host computers, program loading, and debugging, is necessary. However, it is often a cumbersome job to develop such a monitoring system, especially when the target has a parallel architecture. To solve this problem, we developed an integrated monitor system called "Pot". "Pot" consists of a system that runs on the host computer and simple code on the target machine. In order to reduce development costs, the program on the target machine is kept as simple as possible, while "Pot" on the host computer itself provides various functions for system development.
Journal: IEICE Transactions on Information and Systems
Publication status: Published - October 2003
ASJC Scopus subject areas: Computer Vision and Pattern Recognition
Lepus is a utility for identifying and collecting subdomains for a given domain. Subdomain discovery is a crucial part of the reconnaissance phase. One of the strengths of Lepus lies in performing several checks on identified domains for potential subdomain-takeover vulnerabilities. The module is enabled with --takeover and is executed after all others. If such a vulnerability is identified, the results are printed in the output and in a .csv file in the respective project folder under the directory with the results. Checks are performed for the following services. Lepus performs the following:
- Services (collecting subdomains from the services below)
- Dictionary mode for identifying domains (optional)
- Permutations on discovered subdomains (optional)
- Reverse DNS lookups on identified public IPs (optional)
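This is not Lepus's own code; it is just a rough sketch of what a subdomain-takeover check typically looks like, assuming the third-party dnspython package: resolve the subdomain's CNAME and see whether it points at a takeover-prone service whose target no longer resolves. The service fingerprints and the probed subdomain are hypothetical.

```python
import dns.resolver  # third-party package: dnspython

# Hypothetical fingerprints of services known to allow takeovers of dangling CNAMEs.
TAKEOVER_PRONE = ("github.io", "herokuapp.com", "s3.amazonaws.com")

def check_takeover(subdomain: str) -> None:
    try:
        answers = dns.resolver.resolve(subdomain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return  # no CNAME record, nothing to check here
    for rr in answers:
        target = str(rr.target).rstrip(".")
        if target.endswith(TAKEOVER_PRONE):
            try:
                dns.resolver.resolve(target, "A")
            except dns.resolver.NXDOMAIN:
                # A dangling CNAME to a claimable service is the classic takeover signal.
                print(f"[!] possible takeover: {subdomain} -> {target} (target does not resolve)")

check_takeover("assets.example.com")  # placeholder subdomain
```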
The Whois XML Reverse WHOIS API allows developers to find all domain names which contain a specified search term (i.e., name, email, address, phone, etc.) in their WHOIS records. Query results provide all the domain records that correspond to the search terms used and are made available in XML & JSON formats. With the API, it's possible to discover all domain names associated with an individual or an organization as well as find connections with other domains and their owners. Practical usages include cybersecurity, law enforcement, brand protection, marketing research, and cyber fraud prevention.
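A rough sketch of how such a reverse WHOIS query might be issued with Python's requests library; the endpoint URL, payload field names, and result fields shown here are assumptions based on the description above, so verify them against the vendor's documentation before use.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
# Assumed endpoint and payload shape -- confirm against the official documentation.
ENDPOINT = "https://reverse-whois.whoisxmlapi.com/api/v2"

payload = {
    "apiKey": API_KEY,
    "searchType": "current",
    "mode": "purchase",
    "outputFormat": "JSON",  # XML is also offered per the description above
    "basicSearchTerms": {"include": ["Example Organization Inc."]},
}

resp = requests.post(ENDPOINT, json=payload, timeout=30)
resp.raise_for_status()
data = resp.json()
for domain in data.get("domainsList", []):  # field name is an assumption
    print(domain)
```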
Manage and respond to security alerts in Azure Security Center
This topic shows you how to view and process Security Center's alerts and protect your resources. Advanced detections that trigger security alerts are only available with Azure Defender. A free trial is available. To upgrade, see Enable Azure Defender.
What are security alerts?
Security Center automatically collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions, like firewall and endpoint protection solutions, to detect real threats and reduce false positives. A list of prioritized security alerts is shown in Security Center along with the information you need to quickly investigate the problem and recommendations for how to remediate an attack. To learn about the different types of alerts, see Security alerts - a reference guide. For an overview of how Security Center generates alerts, see how Azure Security Center detects and responds to threats.
Manage your security alerts
From Security Center's overview page, select the Security alerts tile at the top of the page, or the link from the sidebar. The security alerts page opens. To filter the alerts list, select any of the relevant filters. You can optionally add further filters with the Add filter option. The list updates according to the filtering options you've selected. Filtering can be very helpful. For example, you might want to address security alerts that occurred in the last 24 hours because you are investigating a potential breach in the system.
Respond to security alerts
From the Security alerts list, select an alert. A side pane opens and shows a description of the alert and all the affected resources. With this side pane open, you can quickly review the alerts list with the up and down arrows on your keyboard. For further information, select View full details. The left pane of the security alert page shows high-level information regarding the security alert: title, severity, status, activity time, description of the suspicious activity, and the affected resource. Alongside the affected resource are the Azure tags relevant to the resource. Use these to infer the organizational context of the resource when investigating the alert. The right pane includes the Alert details tab containing further details of the alert to help you investigate the issue: IP addresses, files, processes, and more. Also in the right pane is the Take action tab. Use this tab to take further actions regarding the security alert. Actions such as:
- Mitigate the threat - provides manual remediation steps for this security alert
- Prevent future attacks - provides security recommendations to help reduce the attack surface, increase security posture, and thus prevent future attacks
- Trigger automated response - provides the option to trigger a logic app as a response to this security alert
- Suppress similar alerts - provides the option to suppress future alerts with similar characteristics if the alert isn’t relevant for your organization
In this document, you learned how to view security alerts. See the following pages for related material:
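As a complement to the portal workflow above, alerts can also be pulled programmatically. A minimal sketch using azure-identity and the Azure Resource Manager REST endpoint; the api-version value and the result field names are assumptions that may need updating to match the current API reference.

```python
import requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
API_VERSION = "2022-01-01"                  # assumed; check the current API reference

# Acquire an ARM token using whatever credential is available in the environment.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/providers/Microsoft.Security/alerts?api-version={API_VERSION}")

resp = requests.get(url, headers={"Authorization": f"Bearer {token.token}"}, timeout=30)
resp.raise_for_status()

for alert in resp.json().get("value", []):
    props = alert.get("properties", {})
    # Field names such as severity/alertDisplayName follow the documented schema,
    # but treat them as assumptions if your API version differs.
    print(props.get("severity"), "-", props.get("alertDisplayName"))
```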
We hope it never happens, but we need a plan to deal with ‘incidents’ should we ever suspect one is happening. This could be anything from an application issue to a suspected compromise. How do we capture the needed environment details on the spot and carry out a full investigation? We’ll demonstrate the tools and processes that everyone should be familiar with when running in a cloud environment. Event schedule → http://g.co/next18 Watch more Security sessions here → http://bit.ly/2zJTZml Next ‘18 All Sessions playlist → http://bit.ly/Allsessions Subscribe to the Google Cloud channel! → http://bit.ly/NextSub Publisher: Google Cloud. You can also watch this video at the source.