Defense against deepfake attacks and extortion is a hot topic in the cyber security world. The use of deepfakes (fake video or audio created with artificial intelligence) has become a new tool for fraudsters and extortionists. In this article, we look at how to spot deepfakes and protect yourself from potential extortion. We discuss the importance of education and awareness of these technologies, the development of security protocols, and the use of technical solutions to detect counterfeit material. This information will help strengthen defenses against one of the newest threats in cyberspace.
We analyze the strategies and techniques that can be used to detect and defend against these modern digital threats, a topic made especially relevant by the rapid growth of artificial intelligence technologies that allow the creation of convincing fake videos and audio recordings. We also discuss ways to counter extortion that relies on deepfakes, including the legal and technological aspects, and we emphasize the importance of awareness and readiness to respond, especially for organizations and public figures. Finally, we offer recommendations for reputation management and information security best practices that can protect against the effects of deepfake attacks and extortion.
Deepfakes are synthetic media created using machine learning algorithms, named after the deep learning techniques used in the creation process and the fake events they depict.
Deepfake techniques cross disciplines and fields from computer science and programming to visual effects, computer animation, and even neuroscience. They can be convincingly realistic and difficult to detect if done well and with sophisticated and powerful technology.
But at the end of the day, machine learning is a fundamental concept for data scientists, and as such deepfakes, and the predictive models used to create them, offer an interesting area of research. The learning methods, algorithmic frameworks, and synthetic output of these models offer insight into deep learning and into the data that drives it.
Earlier in 2021, the FBI issued a warning about the growing threat of synthetic content, which includes deepfakes, describing it as “a wide range of created or manipulated digital content that includes images, video, audio, and text.” People can create the simplest kinds of synthetic content using software like Photoshop. Deepfake attackers are becoming increasingly sophisticated, using technologies such as artificial intelligence (AI) and machine learning (ML). Now they can create realistic images and videos.
Remember that cybercriminals commit cybercrime to make money, and ransomware is usually successful, so using deepfakes as a new extortion tool was a logical step for them. In the traditional method of ransomware distribution, attackers launch a phishing attack with malware embedded in an attractive deepfake video. There is also a newer way to use deepfakes: criminals can depict people or companies engaging in all kinds of illegal (but fake) behavior that would damage their reputation if the images became public. Pay the ransom, and the videos remain private.
In addition to ransomware, synthetic content is used in other ways. Criminals can use data and images as weapons to spread lies and deceive or extort employees, customers and others.
Attackers can use all three of these attack styles together or separately. Remember, fraud has been around for a long time. Phishing attacks are already quite ruthless in their attempts to trick users. However, defenders are not paying enough attention to the rise of AI/ML to spread disinformation and extortion tactics. Today, criminals can even use programs designed to create pornographic images from real photos and videos.
Users already fall victim to conventional phishing attacks, and deepfake-enhanced phishing attempts are even harder for ordinary users to detect. It is important that security programs include cybersecurity training as a mandatory element, and that this training covers how to distinguish fake messages from real ones.
This task may not be as difficult as it seems. Deepfake technology may be quite advanced, but it is not perfect. In one webinar, Raymond Lee, CEO of FakeNet.AI, and Etay Maor, Senior Director of Security Strategy at Cato Networks, explained that one of the key features for detecting fakes is the face, and especially the eyes. If the eyes look unnatural or the facial features don't seem to move, it's probably an altered image.
Another way to distinguish deepfakes from real information is to apply cybersecurity best practices and adopt a philosophy of zero trust: verify all the data you receive, double and even triple check the source of a message, and, where possible, use reverse image search to find the original.
When it comes to your own images, use a digital signature or watermark to make them harder to forge.
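One lightweight way to do this, shown in the minimal sketch below, is to publish a keyed digest alongside each original image; the secret key and file names are placeholders, and this is integrity checking rather than a full watermarking scheme.

import hmac, hashlib
from pathlib import Path

SIGNING_KEY = b"replace-with-a-secret-key"  # placeholder; store the real key securely

def sign_image(path: str) -> str:
    """Return a keyed digest of the image bytes; publish it with the image."""
    data = Path(path).read_bytes()
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_image(path: str, published_digest: str) -> bool:
    """True only if the image bytes still match the published digest."""
    return hmac.compare_digest(sign_image(path), published_digest)

digest = sign_image("portrait.jpg")          # hypothetical file
print(verify_image("portrait.jpg", digest))  # True until the file is altered

Note that a digest only proves a specific file is unaltered; a visible or robust watermark that survives re-encoding requires dedicated watermarking tools.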
In general, existing security measures can be applied to prevent phishing and social engineering attacks. Deepfakes are still in their early stages as an attack method, so cybersecurity teams have the advantage of preparing defenses now, while the tools to detect and defend against these attacks improve. It is important not to let these threats undermine our peace of mind.
For academics and data professionals interested in the impact of deepfake technology on private enterprise, government agencies, cybersecurity, and public safety, studying the methods of creating and detecting deepfakes can be extremely useful. Understanding these methods and the science behind them makes it easier to respond to the potential threats associated with the harmful use of synthetic media.
With the development of deep learning models, it becomes important to develop the skills and resources to detect and prevent the potential threat that comes from the malicious use of deepfakes. This can be an important task for researchers, companies and the public.
Government institutions and large corporations allocate significant funds to the development and improvement of deepfake detection systems. Such investments can help reduce the risks associated with the large-scale spread of false information and disinformation. The models being created by researchers like Thanh Thi Nguyen and his colleagues could be important tools for detecting and combating deepfakes in the future.
The first-order motion model is an interesting and advanced approach to image animation. This model is trained to reproduce motion based on input data and create animations that allow users to animate videos or create new scenes based on existing data.
The authors trained the model to "reconstruct" its training videos by combining a single frame with a learned representation of the motion in the video. This allows the model to understand how objects move in the video and to use that information to create new frames or animations.
Dimitris Poulopoulos, a machine learning engineer, used this model to create interactive scripts and animations. He shared source code and use cases, allowing other users to experiment with the technology.
The applications of this model are diverse, from creating visual effects in movies and video games to animating media content. It is a useful tool for content creators and video editors looking for new ways to create engaging content.
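For readers who want to experiment, the publicly available reference implementation of the first-order motion model exposes helpers along the lines of the sketch below; treat the module name, function signatures, and checkpoint paths as assumptions based on that repository's demo code.

import imageio
from skimage.transform import resize
from demo import load_checkpoints, make_animation  # helpers from the first-order-model repo (assumption)

source_image = resize(imageio.imread("source.png"), (256, 256))  # still image to animate
driving_video = [resize(frame, (256, 256))
                 for frame in imageio.mimread("driving.mp4", memtest=False)]  # motion source

generator, kp_detector = load_checkpoints(
    config_path="config/vox-256.yaml",   # model config shipped with the repo
    checkpoint_path="vox-cpk.pth.tar")   # pre-trained weights, downloaded separately

frames = make_animation(source_image, driving_video, generator, kp_detector, relative=True)
imageio.mimsave("result.mp4", [(frame * 255).astype("uint8") for frame in frames])

The relative=True flag transfers the driving video's motion relative to its first frame, which generally preserves the source face's geometry better than absolute keypoint transfer.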
Looking for telltale cues in a piece of media is a starting point, but it's not enough. We also recommend running a lateral search to confirm or deny the accuracy of the video, something you can also do at home. According to a fact-checking guide by Mike Caulfield, a research fellow at the University of Washington's Center for an Informed Public, lateral searching means reading "many related sites instead of digging deep into a specific site." Open multiple tabs in your web browser to learn more about the claim, who is spreading it, and what other sources are saying about it.
Caulfield advises, “Go off the page and see what other authoritative sources have said about the site,” and pull together “different pieces of information from the Internet to get a better picture of the site.”
If the audio recording of Biden discussing the bank failure were real, news outlets would almost certainly have covered it. But when we searched, the results included only other social media posts sharing the clip and news articles debunking it. Nothing confirmed it was real.
Similarly, when PolitiFact found a video of DeSantis announcing his 2024 presidential run, no reliable news source confirmed it — something that would have happened if DeSantis had actually announced.
“It’s important to note, first of all, whoever is sharing this video, you know, look for a little bit of the origin of where this video was originally from,” Liu said. “If the message really matters to the audience, they should look for cross-validations.”
Fact checkers also use reverse image searches, which social media users can also do. Take screenshots of videos and upload them to sites like Google Images or TinEye. The results can reveal the original source of the video, whether it has been published in the past, and whether it has been edited. |
11.3.2. Optimizing regular expressions
The program() filter function and some other syslog-ng objects accept regular expressions as parameters. But evaluating general regular expressions puts a high load on the CPU, which can cause problems when message traffic is very high. Often the regular expression can be replaced with simple filter functions and logical operators that achieve the same effect at a much lower CPU load.
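For example, a general regular expression with alternation can often be replaced by simple substring filters joined with logical operators. A minimal sketch; the filter names and matched strings are placeholders:

# CPU-heavy: a general regular expression with alternation
filter f_demo_regex { message("(deny|drop|reject)"); };

# Cheaper: simple substring matches combined with logical operators
filter f_demo_simple {
    message("deny" type(string) flags(substring))
    or message("drop" type(string) flags(substring))
    or message("reject" type(string) flags(substring));
};

Both filters select the same messages, but the second form avoids running a regular expression engine on every message. |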
Website is down or not accessible
The web hosting platform is robust and maintains some basic safeguards against web attacks. One of those features is brute force login protection.
After three (3) failed login attempts, the user's IP address is blocked for 60 minutes. Because an office often shares one public IP address, that block can affect everyone at that location. After 60 minutes, the block is released and website access is available again. This feature will not block other website visitors from accessing the website; they will see a normally functioning website.
To test this feature out, try to visit the website that appears down on a mobile phone that is using its own, separate data plan. The site should be viewable if the brute force protection was activated.
Another way to test is to visit https://downforeveryoneorjustme.com/ from the computer that is getting the website access error. This is another way to see whether the website is down for everyone or whether it is just the brute force protection on the web hosting platform blocking you.
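The same check can be scripted; here is a minimal sketch using Python's requests library, with a placeholder URL:

import requests

URL = "https://example.com"  # replace with the site that appears down

try:
    r = requests.get(URL, timeout=10)
    print(f"{URL} answered with HTTP {r.status_code}")  # any response means the site is up
except requests.exceptions.RequestException as exc:
    # No response from this network only may mean this IP address is blocked
    print(f"No response from {URL}: {exc}")

If this fails from the office network while the same check succeeds on mobile data, the brute force block rather than an outage is the likely cause.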
This is what the screen might look like if the brute force protection was triggered and is blocking access to the website: |
How to minimize the threat
Computerworld - Intrusion-detection systems (IDS) are the subject of industry controversy after a Gartner Inc. report recommended that companies abandon these systems in favor of firewalls (see story).
If organizations want to stop the constantly evolving types of attacks, they must continue to rely on multitiered defense strategies consisting of network security components layered at the perimeter and internal network machines and devices. Such network security components not only include network- and host-based IDSs, but antivirus software, patch management, firewalls, scanners and intrusion-prevention systems (IPS).
Admittedly, this approach has challenges: systems are not adequately integrated, do not identify and share vulnerability information, and rely on numerous rules to identify new threats, which in turn produce volumes of alerts that overwhelm the system and its operators.
The main culprits are IDS/IPS technologies that are generally able to spot attacks based on the common vulnerabilities and exposures (CVE) identifiers they see on a network. However, these same technologies generally don't have the ability to determine whether the targeted machine is actually vulnerable to the attack.
For instance, if malicious code has been written as a Windows-based attack targeting a Windows vulnerability, is the destination IP address actually running Windows or a Unix variant? And, if it's Windows, is it vulnerable to the attack, or has it already been patched? An IDS doesn't have the intelligence to answer these questions and generates incident alerts indiscriminately. In addition, even if the targeted machine is vulnerable, an IDS doesn't have the capability to remediate it.
Furthermore, best-practice and government-compliance directives now require higher standards of network security and integrity to protect consumer privacy, and they must be documented with change-tracking and audit-trail reports.
|Brett Oliphant is the chief technology officer at Lafayette, Ind.-based SecurityProfiling Inc. |
Companies are finding it increasingly difficult and expensive, especially in an environment with rising security standards and policy compliance requirements, to mitigate new threats and manage numerous systems. But relying solely on firewalls isn't the answer. Vendors must create ways to integrate systems, share information intelligently to better defend against blended threats, reduce management and cost requirements, and automate IDS/IPS configuration and tuning along with vulnerability identification and remediation functionalities.
A first and important step in this process is to improve IDS/IPS to minimize false positives that threaten productivity and result in rising costs. This can be accomplished by integrating client configuration data from client agents or a scanner, which will provide the system with data so it can determine if the targeted machines are vulnerable to the attacks, thereby reducing false positives.
Through this integration, the system can validate each alert against the target's actual operating system and patch level, prioritize the incidents that matter, and drive remediation of the machines that are genuinely vulnerable.
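As a rough illustration of that integration, the sketch below suppresses alerts whose target is not actually vulnerable; the inventory layout, alert fields, and CVE identifier are hypothetical.

# Hypothetical asset inventory, e.g. populated by client agents or a scanner
INVENTORY = {
    "10.0.0.5": {"os": "Windows", "patched_cves": {"CVE-2017-0144"}},
    "10.0.0.9": {"os": "Linux",   "patched_cves": set()},
}

def triage(alert: dict) -> str:
    """Classify an IDS alert using the target's configuration data."""
    asset = INVENTORY.get(alert["dst_ip"])
    if asset is None:
        return "investigate"                 # unknown asset: keep the alert
    if alert["target_os"] != asset["os"]:
        return "false_positive"              # e.g. a Windows exploit aimed at Linux
    if alert["cve"] in asset["patched_cves"]:
        return "false_positive"              # the target is already patched
    return "actionable"                      # vulnerable target: escalate and remediate

alert = {"dst_ip": "10.0.0.5", "target_os": "Windows", "cve": "CVE-2017-0144"}
print(triage(alert))  # -> "false_positive", since the host is patched

Even this toy filter shows why configuration data matters: the alert volume drops to the incidents that can actually succeed. |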
One of the primary purposes of an object code license agreement is to establish the conditions under which the software can be used. These conditions may include restrictions on the number of users, limitations on the ability to modify or distribute the code, and requirements for compliance with applicable laws and regulations.
Another important aspect of an object code license agreement is the protection of the developer's intellectual property rights. The agreement may include provisions that prohibit reverse engineering, decompiling, or disassembling of the software, as well as restrictions on the use of proprietary algorithms, libraries, or other elements of the code.
Object code license agreements can also include provisions for software support and maintenance, warranties, and liability limitations. These provisions can help developers manage their risks by limiting their exposure to potential legal actions and disputes.
When creating an object code license agreement, developers should work closely with qualified legal counsel to ensure that all applicable laws and regulations are taken into account. It is important that license agreements are drafted in a clear and concise manner that is easily understood by both parties.
In conclusion, object code license agreements provide software developers with the legal framework needed to protect their intellectual property rights and establish the terms under which their products can be used. These agreements are essential for software development and should be created with care to ensure that they are legally sound and enforceable. |
The unauthorized disclosure of sensitive data through any media or network can have disastrous results for an organization’s brand, reputation and standing with regulatory bodies. That’s why protecting sensitive data from accidental sharing or malicious actors is a priority for organizations today.
There are data security solutions that can, and should, work alongside each other to minimize these data loss risks and ensure sensitive data is only made available to authorized recipients.
Data Loss Prevention (DLP) solutions remove data security risks across email, the web, and endpoints. Using metadata labelling, Data Classification improves the accuracy of the DLP solution: because the solution understands the data, it can apply the appropriate security policy. This level of precision significantly reduces the number of false positives generated, keeping your sensitive data secure and compliant.
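To make the label-driven idea concrete, here is a minimal sketch of a policy check keyed on a classification label; the label names and actions are hypothetical and not the Titus or Clearswift APIs.

# Hypothetical mapping from classification label (metadata) to DLP action
POLICY = {
    "PUBLIC":       "allow",
    "INTERNAL":     "allow_internal_only",
    "CONFIDENTIAL": "encrypt",
    "SECRET":       "block",
}

def dlp_decision(doc_metadata: dict) -> str:
    """Choose a DLP action from the document's classification label."""
    label = doc_metadata.get("classification", "CONFIDENTIAL")  # unlabeled: fail closed
    return POLICY.get(label, "block")

print(dlp_decision({"classification": "INTERNAL"}))  # -> allow_internal_only
print(dlp_decision({}))                              # -> encrypt (fail closed)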
HelpSystems offers the opportunity to combine data classification (Titus) and DLP (Clearswift) to ensure your sensitive data is well protected throughout its lifecycle. |
Classification Techniques Applied for Intrusion Detection
An Intrusion Detection System (IDS) is designed to monitor network activity and identify normal and abnormal behavioral patterns in the network. An abnormal pattern indicates that the system is under attack, with the confidentiality, availability, or integrity of the computer system being compromised. An IDS performs three functions: monitoring, detecting, and responding to malicious activity. The experiment is based on the KDD'99 dataset and categorizes normal and abnormal patterns. The goal of this paper is to compare three classification techniques, considering two classifiers from each technique, and to find the best one based on true positive rate, false positive rate, and average accuracy.
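A minimal sketch of such a comparison with scikit-learn is shown below; the synthetic features stand in for KDD'99 records, and the three classifiers are illustrative choices rather than the exact ones evaluated in the paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
# Synthetic stand-in for KDD'99: 41 numeric features, label 0=normal, 1=attack
X = rng.normal(size=(2000, 41))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier()),
                  ("naive Bayes", GaussianNB()),
                  ("SVM", SVC())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    print(f"{name}: TPR={tp / (tp + fn):.3f}  FPR={fp / (fp + tn):.3f}  "
          f"accuracy={accuracy_score(y_te, y_pred):.3f}")

On real KDD'99 data the same loop applies once the categorical features are encoded and the labels are binarized. |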
• <targets>: anything on the command-line not prefixed with a '-' is assumed to be an IP address or range. There are three valid formats. The first is a single IPv4 address like "192.168.0.1". The second is a range like "10.0.0.1-10.0.0.100". The third is a CIDR address, like "0.0.0.0/0". At least one target must be specified. Multiple targets can be specified, either as multiple options separated by spaces or separated by commas as a single option, such as 10.0.0.0/8,192.168.0.1.
• --range <targets>: the same as the target range specification described above, except as a named parameter instead of an unnamed one.
• -p <ports>, --ports <ports>: specifies the port(s) to be scanned. A single port can be specified, like -p80. A range of ports can be specified, like -p 20-25. A list of ports/ranges can be specified, like -p80,20-25. UDP ports can also be specified with a U: prefix, like --ports U:53.
• --banners: specifies that banners should be grabbed, like HTTP server versions, HTML title fields, and so forth. Only a few protocols are supported.
• --rate <packets-per-second>: specifies the desired rate for transmitting packets. This can be a very small number, like 0.1 for transmitting packets at a rate of one every 10 seconds, or a very large number like 10000000, which attempts to transmit at 10 million packets/second. In my experience, Windows can do 250 thousand packets per second, and the latest versions of Linux can do 2.5 million packets per second. The PF_RING driver is needed to get to 25 million packets/second.
• -c <filename>, --conf <filename>: reads in a configuration file. The format of the configuration file is described below.
• --resume <filename>: the same as --conf, except that a few options are automatically set, such as --append-output. The format of the configuration file is described below.
• --echo: don't run, but instead dump the current configuration to a file. This file can then be used with the -c option. The format of this output is described below under 'CONFIGURATION FILE'.
• -e <ifname>, --adapter <ifname>: use the named raw network interface, such as "eth0" or "dna1". If not specified, the first network interface found with a default gateway will be used.
• --adapter-ip <ip-address>: send packets using this IP address. If not specified, then the first IP address bound to the network interface will be used. Instead of a single IP address, a range may be specified. NOTE: The size of the range must be an even power of 2, such as 1, 2, 4, 8, 16, or 1024 addresses.
• --adapter-port <port>: send packets using this port number as the source. If not specified, a random port will be chosen in the range 40000 through 60000. This port should be filtered by the host firewall (like iptables) to prevent the host network stack from interfering with arriving packets. Instead of a single port, a range can be specified, like 40000-40003. NOTE: The size of the range must be an even power of 2, such as the example above that has a total of 4 ports.
• --adapter-mac <mac-address>: send packets using this as the source MAC address. If not specified, then the first MAC address bound to the network interface will be used.
• --router-mac <mac-address>: send packets to this MAC address as the destination. If not specified, then the gateway address of the network interface will be ARPed.
• --ping: indicates that the scan should include an ICMP echo request. This may be included with TCP and UDP scanning.
• --exclude <targets>: blacklist an IP address or range, preventing it from being scanned. This overrides any target specification, guaranteeing that this address/range won't be scanned. This has the same format as the normal target specification.
• --excludefile <filename>: reads in a list of exclude ranges, in the same target format described above. These ranges override any targets, preventing them from being scanned.
• --append-output: causes output to be appended to the file, rather than overwriting it.
• --iflist: lists the available network interfaces, and then exits.
• --retries: the number of retries to send, at 1-second intervals. Note that since this scanner is stateless, retries are sent regardless of whether replies have already been received.
• --nmap: prints help about nmap-compatibility alternatives for these options.
• --pcap-payloads: read packets from a libpcap file containing packets and extract the UDP payloads, then associate those payloads with the destination port. These payloads will then be used when sending UDP packets with the matching destination port. Only one payload will be remembered per port. Similar to --nmap-payloads.
• --nmap-payloads <filename>: read in a file in the same format as the nmap file nmap-payloads. This contains UDP payloads, so that we can send useful UDP packets instead of empty ones. Similar to --pcap-payloads.
• --http-user-agent <user-agent>: replaces the existing user-agent field with the indicated value when doing HTTP requests.
• --open-only: report only open ports, not closed ports.
• --pcap <filename>: saves received packets (but not transmitted packets) to the libpcap-format file.
• --packet-trace: prints a summary of the packets sent and received. This is useful at low rates, like a few packets per second, but will overwhelm the terminal at high rates.
• --pfring: force the use of the PF_RING driver. The program will exit if the PF_RING DNA drivers are not available.
• --resume-index: the point in the scan at which it was paused.
• --resume-count: the maximum number of probes to send before exiting. This is useful with --resume-index to chop up a scan and split it among multiple instances, though the --shards option might be easier.
• --shards x/y: splits the scan among instances. x is the id for this scan, while y is the total number of instances. For example, --shards 1/2 tells an instance to send every other packet, starting with index 0. Likewise, --shards 2/2 sends every other packet, but starting with index 1, so that it doesn't overlap with the first shard.
The configuration file uses the same parameter names as on the command-line, but without the -- prefix, and with an = sign between the name and the value. An example configuration file might be:
range = 10.0.0.0/8,192.168.0.0/16
range = 172.16.0.0/14
ports = 20-25,80,U:53
ping = true
adapter = eth0
adapter-ip = 192.168.0.1
router-mac = 66-55-44-33-22-11
exclude-file = /etc/masscan/excludes.txt
By default, the program will read default configuration from the file /etc/masscan/masscan.conf. This is useful for system-specific settings, such as the --adapter-xxx options. This is also useful for excluded IP addresses, so that you can scan the entire Internet while skipping dangerous addresses, like those owned by the DoD, and not cause an accidental incident.
When the user presses ctrl-c, the scan will stop, and the current state of the scan will be saved in the file 'paused.conf'. The scan can be resumed with the --resume option:
# masscan --resume paused.conf
The program will not exit immediately, but will wait a default of 10 seconds to receive results from the Internet and save the results before exiting completely. This time can be changed with the --wait option.
The following example scans all private networks for webservers, and
prints all open ports that were found.
# masscan 10.0.0.0/8 192.168.0.0/16 172.16.0.0/12 -p80 --open-only
The following example scans the entire Internet for DNS servers, grabbing their versions, then saves the results in an XML file.
# masscan 0.0.0.0/0 --excludefile no-dod.txt -pU:53 --banners --output-filename dns.xml
You should be able to import the XML into databases and such.
The following example reads a binary scan results file called
bin-test.scan and prints results to console.
# masscan --readscan bin-test.scan
The following example reads a binary scan results file called
bin-test.scan and creates an XML output file called bin-test.xml.
# masscan --readscan bin-test.scan -oX bin-test.xml
Let's say that you want to scan the entire Internet and spread the scan across three machines. Masscan would be launched on all three machines using the following command-lines:
# masscan 0.0.0.0/0 -p0-65535 --shard 1/3
# masscan 0.0.0.0/0 -p0-65535 --shard 2/3
# masscan 0.0.0.0/0 -p0-65535 --shard 3/3
An alternative is the "resume" feature. A scan has an internal index that goes from zero to the number of ports times the number of IP addresses. The following example shows splitting up a scan into chunks of 1000 items each:
# masscan 0.0.0.0/0 -p0-65535 --resume-index 0 --resume-count 1000
# masscan 0.0.0.0/0 -p0-65535 --resume-index 1000 --resume-count 1000
# masscan 0.0.0.0/0 -p0-65535 --resume-index 2000 --resume-count 1000
# masscan 0.0.0.0/0 -p0-65535 --resume-index 3000 --resume-count 1000
A script can use this to split smaller tasks across many other machines, such as Amazon EC2 instances. As each instance completes a job, the script might send a request to a central coordinating server for the next chunk to scan.
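A minimal sketch of such a driver loop in shell; the chunk size and total are placeholders, and a real coordinator would hand out the index ranges instead of computing them locally.

#!/bin/sh
# Launch masscan in fixed-size chunks; a coordinating server could hand
# each chunk to a different machine instead.
CHUNK=1000
INDEX=0
TOTAL=4000   # total probes to cover, for illustration only
while [ "$INDEX" -lt "$TOTAL" ]; do
    masscan 0.0.0.0/0 -p0-65535 --resume-index "$INDEX" --resume-count "$CHUNK"
    INDEX=$((INDEX + CHUNK))
done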
When scanning TCP using the default IP address of your adapter, the built-in stack will generate RST packets. This will prevent banner grabbing. There are two ways to solve this. The first way is to create a firewall rule to block that port from being seen by the stack. How this works is dependent on the operating system, but on Linux this looks something like:
# iptables -A INPUT -p tcp -i eth0 --dport 61234 -j DROP
Then, when scanning, that same port must be used as the source:
# masscan 10.0.0.0/8 -p80 --banners --adapter-port 61234
An alternative is to "spoof" a different IP address. This IP address must be within the range of the local network, but must not otherwise be in use by either your own computer or another computer on the network. An example of this would look like:
# masscan 10.0.0.0/8 -p80 --banners --adapter-ip 192.168.1.101
Setting your source IP address this way is the preferred way of running the program.
This scanner is designed for large-scale surveys, either of an organization or of the Internet as a whole. This scanning will be noticed by those monitoring their logs, which will generate complaints.
If you are scanning your own organization, this may lead to you being fired. Never scan outside your local subnet without getting permission from your boss, with a clear written declaration of why you are scanning.
The same applies to scanning the Internet from your employer. This is another good way to get fired, as your IT department gets flooded with complaints as to why your organization is hacking them.
When scanning on your own, such as from your home Internet or ISP, this will likely cause them to cancel your account due to the abuse complaints. One solution is to work with your ISP, to be clear about precisely what we are doing, to prove to them that we are researching the Internet, not "hacking" it. We have our ISP send the abuse complaints directly to us. For anyone that asks, we add them to our "--excludefile", blacklisting them so that we won't scan them again. While interacting with such people, some instead add us to their whitelist, so that their firewalls won't log us anymore (they'll still block us, of course; they just won't log that fact, to avoid filling up their logs with our traffic).
Ultimately, I don't know if it's possible to completely solve this problem. Despite the Internet being a public, end-to-end network, you are still "guilty until proven innocent" when you do a scan.
masscan is an Internet-scale port scanner, useful for large-scale surveys of the Internet, or of internal networks. While the default transmit rate is only 100 packets/second, it can optionally go as fast as 25 million packets/second, a rate sufficient to scan the Internet in 3 minutes for one port. |
The S4b virus belongs to the Phobos ransomware family. Malware of this sort encrypts all the data on your PC (images, documents, Excel sheets, audio files, videos, etc.), adds its specific extension to every file, and leaves an info.txt ransom note in every folder with encrypted files.
S4b virus: what is known so far?
☝️ S4b is a Phobos family ransomware malicious agent.
The renaming follows this scheme: id[xxxxx].[contact-email].s4b. In the process of encryption, a file named, for example, "report.docx" will be altered to "report.docx.id[9ECFA84E-3449].[[email protected]].s4b".
In each folder containing the encoded files, you will find an info.txt file. It is a ransom note: it contains information on how to pay the ransom and some other remarks, most likely including instructions on how to buy the decryption tool from the S4b developers, whom you can contact at [email protected] via email.
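Based on the renaming scheme above, a small script can inventory which files were hit. This is a minimal sketch; the regular expression simply mirrors the id[...].[email].s4b format described here, and the scanned folder is a placeholder.

import re
from pathlib import Path

# Mirrors the scheme described above: name.ext.id[...].[email].s4b
S4B_PATTERN = re.compile(r"\.id\[[0-9A-Fa-f-]+\]\.\[[^\]]+\]\.s4b$")

def find_encrypted(root: str) -> list:
    """List files whose names match the S4b renaming scheme."""
    return [p for p in Path(root).rglob("*") if S4B_PATTERN.search(p.name)]

for path in find_encrypted(r"C:\Users"):  # choose the folder to inventory
    print(path)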
|Ransomware family||Phobos ransomware|
|Detection||Win32/Injector.CNJW Virus Removal, Win32/Patched.IP Virus Removal, Worm:Win32/Duptwux.A Virus Removal|
|Symptoms||Your files (photos, videos, documents) get a .s4b extension and you can’t open them.|
|Fix Tool||See If Your System Has Been Affected by S4b virus|
The info.txt file that comes packaged with the S4b ransomware provides the following discouraging information:
!!!All of your files are encrypted!!! To decrypt them send e-mail to this address: [email protected]. If we don't answer in 24h., send e-mail to this address: [email protected]
In the screenshot below, you can see what a folder with files encrypted by the S4b looks like. Each filename has the “.s4b” extension appended to it.
How did my machine catch S4b ransomware?
There are many possible ways of ransomware injection. Currently, the three most popular ways for attackers to get the S4b virus working in your system are email spam, Trojan injection, and peer-to-peer file transfer.
- If you access your inbox and see emails that look just like notifications from utility service providers, delivery agencies like FedEx, web-access providers, and so on, but whose sender is unknown to you, beware of opening those letters. They are very likely to have a harmful item enclosed, so it is even more dangerous to download any attachments that come with emails like these.
- Another option for ransom hunters is the Trojan model. A Trojan is a program that infiltrates your PC disguised as something legitimate; for instance, you download an installer for some program you want, or an update for some software, but what unpacks turns out to be a harmful agent that corrupts your data. Since an update wizard can have any title and any icon, you had better be sure you can trust the source of whatever you are downloading. The optimal choice is to use the software companies' official websites.
- As for the peer-to-peer file transfer protocols like BitTorrent or eMule, the danger is that they are even more trust-based than the rest of the Internet. You can never know what you download until you get it. Our suggestion is that you use trustworthy resources. Also, it is a good idea to scan the folder containing the downloaded objects with the antivirus as soon as the downloading is complete.
How do I get rid of ransomware?
It is important to note that besides encrypting your files, the S4b virus will most likely install Vidar Stealer on your PC to get access to credentials for different accounts (including cryptocurrency wallets). This spyware can extract your logins and passwords from your browser's autofill storage.
Remove S4b with Gridinsoft Anti-Malware
We have been using this software on our systems for years, and it has always been successful in detecting viruses. It has blocked the most common ransomware in our tests, and we are confident it can remove S4b as well as other malware hiding on your computer.
To use Gridinsoft to remove malicious threats, follow the steps below:
1. Begin by downloading Gridinsoft Anti-Malware, accessible via the blue button below or directly from the official website gridinsoft.com.
2. Once the Gridinsoft setup file (setup-gridinsoft-fix.exe) is downloaded, execute it by clicking on the file.
3. Follow the installation setup wizard's instructions diligently.
4. Access the "Scan Tab" on the application's start screen and launch a comprehensive "Full Scan" to examine your entire computer. This inclusive scan encompasses the memory, startup items, the registry, services, drivers, and all files, ensuring that it detects malware hidden in all possible locations.
Be patient, as the scan duration depends on the number of files and your computer's hardware capabilities. Use this time to relax or attend to other tasks.
5. Upon completion, Anti-Malware will present a detailed report containing all the detected malicious items and threats on your PC.
6. Select all the identified items from the report and confidently click the "Clean Now" button. This action will safely remove the malicious files from your computer, transferring them to the secure quarantine zone of the anti-malware program to prevent any further harmful actions.
8. If prompted, restart your computer to finalize the full system scan procedure. This step is crucial to ensure thorough removal of any remaining threats. After the restart, Gridinsoft Anti-Malware will open and display a message confirming the completion of the scan.
Remember Gridinsoft offers a 6-day free trial. This means you can take advantage of the trial period at no cost to experience the full benefits of the software and prevent any future malware infections on your system. Embrace this opportunity to fortify your computer's security without any financial commitment.
Trojan Killer for “S4b” removal on locked PC
In situations where it becomes impossible to download antivirus applications directly onto the infected computer due to malware blocking access to websites, an alternative solution is to utilize the Trojan Killer application.
Very few security tools can be installed on USB drives, and the antiviruses that can usually require an expensive license. For this case, I can recommend another GridinSoft solution: Trojan Killer Portable. It has a 14-day free trial mode that offers the entire feature set of the paid version, which is definitely enough time to wipe the malware out.
Trojan Killer is a valuable tool in your cybersecurity arsenal, helping you to effectively remove malware from infected computers. Now, we will walk you through the process of using Trojan Killer from a USB flash drive to scan and remove malware on an infected PC. Remember, always obtain permission to scan and remove malware from a computer that you do not own.
Step 1: Download & Install Trojan Killer on a Clean Computer:
1. Go to the official GridinSoft website (gridinsoft.com) and download Trojan Killer to a computer that is not infected.
2. Insert a USB flash drive into this computer.
3. Install Trojan Killer to the "removable drive" following the on-screen instructions.
4. Once the installation is complete, launch Trojan Killer.
Step 2: Update Signature Databases:
5. After launching Trojan Killer, ensure that your computer is connected to the Internet.
6. Click "Update" icon to download the latest signature databases, which will ensure the tool can detect the most recent threats.
Step 3: Scan the Infected PC:
7. Safely eject the USB flash drive from the clean computer.
8. Boot the infected computer into Safe Mode.
9. Insert the USB flash drive.
10. Run tk.exe
11. Once the program is open, click on "Full Scan" to begin the malware scanning process.
Step 4: Remove Found Threats:
12. After the scan is complete, Trojan Killer will display a list of detected threats.
13. Click on "Cure PC!" to remove the identified malware from the infected PC.
14. Follow any additional on-screen prompts to complete the removal process.
Step 5: Restart Your Computer:
15. Once the threats are removed, click on "Restart PC" to reboot your computer.
16. Remove the USB flash drive from the infected computer.
Congratulations on effectively removing S4b and the concealed threats from your computer! You can now have peace of mind, knowing that they won't resurface again. Thanks to Gridinsoft's capabilities and commitment to cybersecurity, your system is now protected.
Sometimes racketeers will decrypt several of your files to prove that they really do have the decryption tool. As the S4b virus is relatively recent ransomware, security engineers have not yet found a method to undo its work. However, decryption tools are constantly being upgraded, so an effective countermeasure may soon be available.
Of course, if the criminals succeed in encoding someone's critical files, the desperate victim will probably comply with their demands. However, paying the racketeers does not necessarily mean that you will get your data back; it is still risky. After receiving the ransom, the racketeers may deliver a wrong decryption code to the victim, and there have been reports of malefactors simply vanishing after getting the ransom without even writing back.
The best safety measure against ransomware is to have an OS restore point or copies of your essential files in cloud storage, or at least on external storage. Obviously, that might be insufficient: your most important file could be the one you were working on when it all happened. Nevertheless, it is something. It is also reasonable to scan your PC for viruses with an antivirus program after the system restoration.
S4b is not the only ransomware of its kind; other specimens act in the same manner, for instance Magaskosh, Nworansom, Rzew, and some others. The two main differences between them and S4b are the ransom amount and the method of encryption. The rest is the same: documents become encoded, their extensions are altered, and ransom notes appear in each directory containing encrypted files.
Some fortunate users were able to decode the captive files with the help of free tools provided by anti-ransomware experts. Sometimes the racketeers accidentally include the decryption code in the ransom note; such an epic fail allows the injured party to restore the files. But naturally, one should never rely on such a chance. Make no mistake: ransomware is a criminal scheme for pulling money out of victims.
How to avert ransomware infiltration?
S4b ransomware is not all-powerful, and neither is any similar malware.
You can protect yourself from infiltration with three easy steps:
- Ignore any letters from unknown senders with unknown addresses, or with content that has nothing to do with anything you are expecting (how can you win a lottery without participating in it?). If the email subject is likely something you are waiting for, scrutinize all elements of the questionable letter carefully. A hoax letter will almost surely contain a mistake.
- Never use cracked or unknown software. Trojan viruses are often spread as an element of cracked products, most likely under the guise of a "patch" that prevents the license check. Understandably, dubious programs are difficult to tell apart from reliable software, because trojans may also have the functionality you seek. You can try to find information on a software product on anti-malware forums, but the optimal solution is not to use such software at all.
- And finally, to be sure about the safety of the objects you downloaded, check them with GridinSoft Anti-Malware. This software will be a perfect defense for your PC.
Frequently Asked Questions
🤔 Can I somehow access “.s4b” files?
Unfortunately, no. You need to decipher the “.s4b” files first. Then you will be able to open them.
🤔 What should I do to make my files accessible as fast as possible?
Hopefully, you have made a copy of those important files. If not, there is still a function of System Restore but it needs a Restore Point to be previously saved. The rest of the methods require patience.
🤔 Will GridinSoft Anti-Malware remove all the encrypted files alongside the S4b virus?
Absolutely not! Your encrypted files are no threat to your PC.
GridinSoft Anti-Malware only deals with actual threats. The malware that has infiltrated your PC may still be active, periodically launching checks in order to encode any new files you create after the attack. As already mentioned, the S4b virus does not come alone: it installs backdoors and keyloggers that can steal your account credentials and give criminals easy access to your system later.
🤔 What should I do if the S4b virus has blocked my PC and I can't get the activation code?
If that happens, you need to prepare a flash drive with a pre-installed Trojan Killer and use Safe Mode to perform the procedure. The ransomware starts automatically as the system boots and encodes any new files created or imported into your system. To stop this, use Safe Mode, which allows only vital programs to run on system start. Consider reading our manual on running Windows in Safe Mode.
🤔 What can I do right now?
Many of the encrypted files might still be at your disposal
- If you sent or received your critical files by email, you could still download them from your online mailbox.
- You might have shared images or videos with your friends or family members. Just ask them to give those pictures back to you.
- If you have initially downloaded any of your files from the Web, you can try downloading them again.
- Your messengers, social networks pages, and cloud disks might have all those files too.
- It might be that you still have the needed files on your old computer, a laptop, cellphone, external storage, etc.
HINT: You can use file recovery utilities to get your lost data back, since ransomware encrypts copies of your files and deletes the originals. In the tutorial below, you can see how to use PhotoRec for such a restoration, but be advised: you won't be able to do it before you remove the virus with an antivirus program.
I need your help to share this article.
It is your turn to help other people. I have written this article to help people like you. You can use the buttons below to share this on your favorite social media: Facebook, Twitter, or Reddit.
Brendan Smith
How to Remove S4B Ransomware & Recover PC
Name: S4B Virus
Description: S4B Virus is a ransomware-type infection. This virus encrypts important personal files (video, photos, documents). The encrypted files can be tracked by the specific .s4b extension, so you can't use them at all.
Operating System: Windows
Application Category: Virus |
by Christoph Schmittner, Zhendong Ma, Thomas Gruber and Erwin Schoitsch (AIT)
Connected, intelligent, and autonomous vehicles pose new safety and security challenges. A systematic and holistic safety and security approach is a key to addressing these challenges. Safety and security co-engineering in the automotive domain considers the coordination and interaction of the lifecycles, methodologies, and techniques of the two disciplines, as well as the development of corresponding standards.
Connected, intelligent, and autonomous vehicles transform traditionally mechanical and electrical cars into ‘networked computers on wheels’. Along with the many technology breakthroughs and benefits, challenges of safety and security become imminent and real. The electrical and electronic systems that control an automated vehicle are no longer immune to cyberattacks commonly seen in IT systems. A combined safety and security approach is necessary to address the challenges that have arisen in recent years, including co-engineering activities, methodologies, techniques, and a coherent approach in relevant standards.
The correct identification of safety and security goals is the first step in the development lifecycle of a system. The identification of hazards and assets reveals potentially vulnerable parts of a system. Subsequently, a first concept architecture for these parts is defined which can then be analysed to identify potential weaknesses, e.g., if a failure or an attack could trigger an intolerable risk. In both cases, requirements are defined which aim at preventing such risks. Different methods should be used to address various issues during the development lifecycle with different levels of detail. Figure 1 displays the most common methods in the respective phases of the V-model.
Figure 1: Dependability engineering in the development lifecycle.
A useful security technique is threat modelling, which defines a theoretical model of perceived threats to a system. We developed a systematic approach to applying threat modelling to automotive security analysis and combined it with the Failure Mode and Effect Analysis to form FMVEA, the Failure Modes, Vulnerabilities and Effects Analysis. Threat modelling should be performed in all phases of the development lifecycle, with different levels of detail and different objectives in each phase. In the concept phase, modelling results in high-level security and safety requirements and security concepts. In the product development phase, it can define technical security and safety requirements for functional, security, and safety design; it can also be used to discover design vulnerabilities and flaws and to specify comprehensive requirements that can be verified and validated in unit and integration testing, iteratively and in parallel with system design and implementation. In the production and operation phase, it prioritises risks and prepares penetration testing on completed automotive components and systems. A knowledge base is continuously enriched by the output from threat and failure modelling activities, enabling the reuse of artefacts across different projects. Furthermore, related vulnerabilities and threats from external sources are promptly incorporated into the threat and mitigation catalogue.
Figure 2: Iterative threat modelling and mitigation during the development lifecycle.
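To illustrate what a combined analysis record can capture, here is a minimal, purely illustrative sketch of an FMVEA-style entry; the field names and example content are assumptions, not the method's normative format.

from dataclasses import dataclass

@dataclass
class FmveaEntry:
    """One row of a combined failure/threat analysis (illustrative only)."""
    element: str           # system element under analysis
    failure_mode: str      # the classic FMEA part
    vulnerability: str     # the security part FMVEA adds
    threat_agent: str
    effect: str
    severity: int          # e.g. 1 (low) .. 4 (critical)
    mitigation: str

entry = FmveaEntry(
    element="CAN gateway",
    failure_mode="message flooding exhausts receive buffers",
    vulnerability="no authentication on diagnostic messages",
    threat_agent="attacker with OBD-II access",
    effect="loss of powertrain control messages",
    severity=4,
    mitigation="message authentication plus rate limiting",
)
print(entry)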
A new version of the automotive functional safety standard ISO 26262 is currently under development. Although not suited to completely autonomous cars, automated functions are considered to a significantly higher degree than in the first edition. This is supported by the new standard development SotIF (Safety of the Intended Functionality). SotIF describes nominal performance metrics for sensor systems and automated functions, regulating the area where a system may cause a hazard without a failure in the traditional sense: the processing algorithm makes a hazardous decision based on its received understanding of the environment, without any fault in the system. This could be caused by a limitation of the sensor algorithm, signal noise, or insufficient sensor performance. A new ISO/SAE automotive security standard completes these activities. All three standards need to consider the increased interaction and co-engineering between system, safety, and security engineers. AIT is involved in the development of all three standards and is a member of the cybersecurity and safety task group that developed the corresponding annex. The goal of interaction and communication points is to lay the groundwork for a workflow with shared phases.
Figure 3: SotIF approach combined with safety and cybersecurity co-engineering.
AIT has further developed automotive safety and security co-engineering in the Artemis project EMC2 and demonstrated it for a Hybrid Electric Powertrain control system of AVL. In the SCRIPT project, we applied the newly published SAE J3061 standard to conducting a TARA in the development of a secure communication gateway for autonomous off-road vehicles. We are also working towards an efficient, model-based approach to multi-concern assurance, including safety, security, reliability, and availability, in the scope of the ECSEL project AMASS. AIT will take the next step towards safe, secure, and cost-efficient automated driving in the ECSEL project AUTODRIVE starting in 2017. The interaction point approach of ISO 26262 Edition 2 will be the object of research in the ECSEL project AQUAS, also starting in 2017.
C. Schmittner, Z. Ma, E. Schoitsch, T. Gruber: “A Case Study of FMVEA and CHASSIS as Safety and Security Co-Analysis Method for Automotive Cyber-physical Systems,” in 1st ACM Workshop on Cyber-Physical System Security, Apr. 2015, ACM, pp. 69-80.
E. Schoitsch, C. Schmittner, Z. Ma, T. Gruber: “The Need for Safety & Cyber-Security Co-engineering and Standardization for Highly Automated Automotive Vehicles”, AMAA 2015, Berlin, Germany, July 2015.
C. Schmittner, et al.: “Using SAE J3061 for Automotive Security Requirement Engineering”, in International Conference on Computer Safety, Reliability, and Security, pp. 157-170. Springer, 2016.
Christoph Schmittner, Zhendong Ma, Thomas Gruber, Erwin Schoitsch |
Quickstart and first steps
How to use the Demo Installer to quickly set up a Search Guard PoC for Elasticsearch and Kibana. Use the Kibana config GUI to add users, roles and permissions.
Architecture and Request Flow
A high-level view on the Search Guard architecture and the request flow. This presentation describes the main concepts of Search Guard and how security is implemented.
This deck describes how the Search Guard configuration for users, roles and permissions is structured, and explains how to apply configuration changes.
Active Directory & LDAP
How to connect Search Guard to an Active Directory or LDAP server, and how to configure authentication and authorization.
JSON web tokens
How to use JSON web tokens for Elasticsearch single sign on authentication.
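A minimal sketch of the client side, assuming a Search Guard JWT authenticator that reads a bearer token from the Authorization header; the signing key, claim names, and cluster URL are placeholders that must match your Search Guard configuration.

import time
import jwt        # PyJWT
import requests

SIGNING_KEY = "replace-with-the-key-configured-in-search-guard"  # placeholder

token = jwt.encode(
    {"sub": "jdoe", "roles": "sg_readall", "exp": int(time.time()) + 300},
    SIGNING_KEY,
    algorithm="HS256",
)

r = requests.get(
    "https://localhost:9200/_searchguard/authinfo",
    headers={"Authorization": f"Bearer {token}"},  # header name is configurable (assumption)
    verify=False,  # demo only: skip TLS verification for a self-signed PoC certificate
)
print(r.status_code, r.json())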
Use Search Guard audit logging to track access to your cluster and to stay compliant with regulations like PCI, HIPAA, SOX, GDPR and ISO.
Document- and Field-Level Security
How to apply fine-grained access control to documents and fields in indices. Filter documents, and filter or anonymize fields, based on the user's roles.
VPNs and firewalls are the norms, but perimeter security is not enough anymore. The Zero Trust Security model moves access control mechanisms from the network perimeter to the actual users, devices, and systems. |
A Novel Method Based on Clustering Algorithm and SVM for Anomaly Intrusion Detection of Wireless Sensor Networks
Based on the principle that members of the same class are adjacent, an anomaly intrusion detection method based on K-means and the Support Vector Machine (SVM) is presented. To overcome the disadvantage that the K-means algorithm requires initialized parameters, this paper proposes an improved K-means algorithm with a strategy of adjustable parameters. Using the locations of wireless sensor network (WSN) nodes, clustering results are obtained by applying the improved K-means algorithm to the WSN, and the SVM algorithm is then applied to the different clusters for anomaly intrusion detection. Simulation results show that the proposed method can detect abnormal behaviors efficiently and has a higher detection rate and lower false positive rate than current typical intrusion detection schemes for WSNs.
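A minimal sketch of the two-stage idea with scikit-learn: cluster first, then train a separate SVM per cluster. Synthetic points stand in for WSN measurements, and details such as the paper's adjustable-parameter strategy are omitted.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Synthetic stand-in for readings grouped around two sensor-node locations
normal = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(8, 1, (300, 2))])

# Stage 1: partition the data into location-based clusters
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(normal)

# Stage 2: one anomaly detector per cluster, trained on that cluster's data
detectors = {
    c: OneClassSVM(nu=0.05, gamma="scale").fit(normal[km.labels_ == c])
    for c in range(km.n_clusters)
}

def is_anomalous(x):
    """Route a new reading to its nearest cluster's SVM and score it."""
    c = int(km.predict(x.reshape(1, -1))[0])
    return detectors[c].predict(x.reshape(1, -1))[0] == -1

print(is_anomalous(np.array([0.2, -0.5])))  # near a normal cluster -> False
print(is_anomalous(np.array([4.0, 4.0])))   # far from both clusters -> likely True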
Dongye Sun, Wen-Pei Sung and Ran Chen
Z. H. Xiao et al., "A Novel Method Based on Clustering Algorithm and SVM for Anomaly Intrusion Detection of Wireless Sensor Networks", Applied Mechanics and Materials, Vols. 121-126, pp. 3745-3749, 2012 |
The program provides protection against various types of viruses (polymorphic and self-encrypting, scripting, macro viruses), and thanks to heuristic analysis, the program monitors, limits and blocks suspicious activities performed by applications. Three basic methods are used to detect threats: signature-based, proactive and heuristic. The program is also equipped with a module that allows detecting and blocking network attacks.
The MailChecker module in the program, which acts as an anti-spam filter, works with all the most popular e-mail clients that operate over SMTP and POP3.
The trial version allows you to use the program for a period of 30 days. |
The growing complexity of interactions between computers and networks makes the subject of network security a very interesting one. As our dependence on the services provided by computing networks grows, so does our investment in such technology. In this situation, there is a greater risk of occurrence of targeted malicious attacks on computers and networks, which could result in system failure. At the user level, the goal of network security is to prevent any malicious attack by a virus or a worm. However, at the network level, total prevention of such malicious attacks is an impossible and impractical objective to achieve. A more attainable objective would be to prevent the rampant proliferation of a malicious attack that could cripple the entire network.
Traditional Intrusion Detection Systems (IDSs) focus on the detection of attacks at the individual nodes, after a malicious code has entered individual machines in a network. However, repeated failures of conventional IDSs have led researchers to develop methods that integrate detection systems in networks and use their collective intelligence to defend against malicious attacks. Such approaches utilize the synergistic power generated by the network, as nodes share prior and current knowledge of detected attacks and related information with other nodes.
This dissertation investigates the practical application of a cooperative approach used to defend computer networks against attacks from external agents. I focus on the detection of metamorphic NOP (No OPeration) sleds, which are common in buffer overflow attacks, and on the role of topology in the rate of spread of a malicious attack. The aim of this study is to use the results to provide recommendations that can be utilized to develop optimal network security policies.
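As a toy illustration of why metamorphic sleds are hard to catch: a naive detector that flags long runs of NOP-like bytes catches only the simplest sleds, because metamorphic sleds substitute varied, semantically do-nothing instruction sequences. The byte set and threshold below are illustrative.

# Deliberately naive: flags runs of single-byte "no-effect" x86 opcodes.
# Metamorphic NOP sleds are built precisely to evade this kind of signature.
NOP_LIKE = {0x90, 0x96, 0x97, 0x41, 0x42}  # nop, xchg, inc reg, ... (illustrative)
THRESHOLD = 32  # minimum run length to report

def find_sleds(payload: bytes):
    """Yield (offset, length) for each long run of NOP-like bytes."""
    run_start = None
    for i, b in enumerate(payload):
        if b in NOP_LIKE:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= THRESHOLD:
                yield run_start, i - run_start
            run_start = None
    if run_start is not None and len(payload) - run_start >= THRESHOLD:
        yield run_start, len(payload) - run_start

packet = b"\x00" * 10 + b"\x90" * 64 + b"\x31\xc0"  # toy buffer with a plain sled
print(list(find_sleds(packet)))  # -> [(10, 64)]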
|Committee:||Berg, George, Goel, Sanjay|
|School:||State University of New York at Albany|
|School Location:||United States -- New York|
|Source:||DAI-B 71/05, Dissertation Abstracts International|
|Subjects:||Information science, Computer science|
|Keywords:||Intrusion detection, Malicious attacks, Network security, Network topology, Polymorphic attacks|
The large-scale journey to the cloud has fundamentally reshaped the digital business and the traditional paradigm of the network perimeter. Hybrid infrastructure and distributed workers are now a part of the furniture of an increasingly diverse digital estate, with multi-cloud practices introducing a new layer of complexity that most organizations are ill-equipped to address.
In the cloud, security teams not only struggle with a lack of visibility and control, but also with diverse and incompatible defenses that often lead to overly relaxed permissions and simple mistakes. This traditional ‘stovepipe’ approach to security is rarely robust and unified enough to provide sufficient coverage, relying on static and siloed methods that fail to detect compromised credentials, insider threats, and critical misconfigurations. Less than a third of businesses are monitoring abnormal workforce behavior across their cloud footprint. This is alarming considering the significant increase in usage of cloud apps and collaboration platforms.
Darktrace’s Cyber AI Platform fills these gaps with self-learning AI that understands ‘normal’ at every layer, dynamically analyzing the dispersed and unpredictable behaviors that show up in email, cloud, and the corporate network. This unified scope allows the system to spot subtle deviations indicative of a threat – from an unusual resource creation or open S3 bucket in AWS, to suspicious data movement in Salesforce, to a new inbox rule or strange login location in Microsoft 365.
Unlike policy-based controls, the immune system understands the human behind every trusted account in the cloud, providing a unified detection engine that can correlate the weak and subtle signals of an advanced attack.
Learn how to track system intruders with honeypots and resource-integrity tools. Honeypots can lure attackers so that you can study their methods of operation, and resource-integrity tools can alert you to changes in files or other system resources.
Many organizations today are exploring adoption of Windows 10. Often touted as the last version of Windows, it is now a constantly evolving Windows as a Service solution. In this one-day training, you'll find out what this new model for Windows really means to your organization and what the benefits are once you've made the move to Windows 10.
HTTP Error 201
A 201 (Created) response means the request succeeded and a new resource was created; the response typically carries a Location header pointing to the new resource. Client code that treats only 200 as success will mishandle this: if the only issue is the response code, the fix is simply to extend the success check to also accept 201. If the item was not created for some reason, a status URI might instead return a 410 (Gone) response; that URI would eventually become unavailable after the resource has been created or rejected, but the time frame for this is up to you. For long-running WebDAV requests, the server may return a 102 (Processing) status code to indicate to the client that it is still processing the method. The authoritative definitions are in RFC 1945 (HTTP/1.0) and RFC 2616: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
HTTP Status Codes Cheat Sheet
Informational (1xx): since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions. A protocol switch (101) should be made only when it is advantageous, for example moving to a newer HTTP version, or to a real-time, synchronous protocol when delivering resources that use such features.
Success (2xx): 202 (Accepted) means the request might or might not eventually be acted upon, and may be disallowed when processing occurs. 203 (Non-Authoritative Information, since HTTP/1.1) indicates the response came through a transforming proxy. 207 (Multi-Status, WebDAV; RFC 4918) carries multiple sub-responses in the message body, since a WebDAV request may contain many sub-requests involving file operations and may take a long time to complete.
Redirection (3xx): the temporary URI should be given by the Location field in the response, and unless the request method was HEAD, the response entity should contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents cannot redirect automatically. Many user agents incorrectly treated 302 (Found) as if it were 303 (See Other), performing a GET on the Location field-value regardless of the original request method, so HTTP/1.1 added status codes 303 and 307 to distinguish the two behaviours; 307 has the same semantics as 302 except that the user agent must not change the HTTP method used. The 305 and 306 codes are effectively historical: 306 is unused, and many HTTP clients (such as Mozilla and Internet Explorer) do not correctly handle 305 responses, primarily for security reasons.
Client errors (4xx): 400 (Bad Request) means the server cannot or will not process the request due to an apparent client error, such as malformed request syntax. 401 (Unauthorized) means authentication is needed to get the requested response; some sites also issue it when an IP address is banned from the website. 402 (Payment Required) is rarely used, although Apple's MobileMe service generated it ("httpStatusCode:402" in the Mac OS X Console log) when an account was delinquent. 404 (Not Found) can be used instead of 403 (Forbidden) when the server does not wish to reveal exactly why access is refused. 411 (Length Required): the client may repeat the request if it adds a valid Content-Length header field containing the length of the message body. 412 (Precondition Failed), 414 (Request-URI Too Long, when the Request-URI is longer than the server is willing to interpret), 415 (Unsupported Media Type, for example when the client uploads an image as image/svg+xml but the server requires a different format), 416 (Range Not Satisfiable; the client has asked for a portion of the file that the server cannot supply), and 422 (Unprocessable Entity) cover further client-side problems. The Range header itself lets HTTP clients resume interrupted downloads or split a download into multiple simultaneous streams, and content developers should be aware that some clients implement fixed limits. 429 (Too Many Requests) signals rate limiting, although when a server is under attack, responding to every request with a 429 status code itself consumes resources; some servers instead return no information and close the connection, which is useful as a deterrent for malware. 449 (Retry With) is a Microsoft extension.
Server errors (5xx): 505 (HTTP Version Not Supported) responses should contain an entity describing why that version is not supported and what other protocols the server supports. 508 (Loop Detected) is sent when the server detects an infinite loop while processing a request, and 509 (Bandwidth Limit Exceeded) is an Apache extension used by many servers despite not being standardized.
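A minimal sketch of the fix suggested above, assuming the third-party requests library; the endpoint URL and payload are placeholders:

```python
# Treat both 200 and 201 as success instead of checking for 200 alone.
import requests

resp = requests.post("https://api.example.com/items", json={"name": "demo"})
if resp.status_code in (200, 201):         # 201 Created is still a success
    print("created:", resp.headers.get("Location"))
elif 300 <= resp.status_code < 400:
    print("redirected to:", resp.headers.get("Location"))
else:
    resp.raise_for_status()                 # raises for 4xx/5xx responses
```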
"Okay, I hear a lot about these computer viruses but what do they actually look like?" – goes one of the most frequently asked questions we get. We have been working on some visualizations projects trying to answer that. We have mentioned our efforts in graphing malware earlier. The latest attempt is a 3D animation that visualizes the structure and execution of the W32/Bagle.AG@mm worm.
The boxes in the picture are functions of the worm. The one on the top is the "main" where the execution starts. The first ring contains all the functions that "main" calls. The second all the functions that the ones on the first ones call and so on. All connecting lines represent the calls from one function to the other. Red boxes belong to the virus code while the blue ones are API calls library code that do not belong to the malicious code. |
This threat model is the deliverable of one of the finalists of our Spring 2023 Hackathon.
The team is tasked with threat modeling a rideshare app based on this use case. They used a combination of STRIDE and LINDDUN GO as their primary methodologies.
- Andrew Morehouse (@morehouse_hacks), Security and Compliance Analyst, Decisions, VA, US
- Chris Ramirez (@cramirez), Principal Software Security Engineer, Axway, AZ, US
- Duncan Hopewell (@n1ffl3r), Application Security Engineer, Kubota North America, TX, US
- Nandita Rao Narla (@Nandita), Head of Technical Privacy and Governance, DoorDash, CA, US
Which threat modeling framework(s) used and why
We utilized STRIDE and LINDDUN GO, mostly because those were the models that the team had familiarity with.
Diagramming or thinking tool(s) used and why
LucidChart, because it was web based and allowed live collaboration.
What is the scope of the threat model?
What are the top 3 threats you identified? What are the mitigation plans?
- Abuse of the machine learning-driven access and pickup points. To mitigate, allow for user overrides of points and use those overrides as feedback to detect poorly placed points.
- The collection of personal documents leads to significant personal information exposure. In order to mitigate, minimize the amount of time that personal documents are stored, and minimize the linkability of these documents or other personal information. In particular, limit the linkability within the analytics system.
- Denial of service against the demand service. The demand service uses web sockets, a resource-intensive technology, to have bi-directional communications. There isn’t any indication of a protective technology there that would prevent spamming. Mitigation is simply adding protective technology to the data flow.
How did you prioritize the threats for mitigation?
After enumerating threats, each threat was given a likelihood and impact rating to determine inherent risk. Additionally, each threat was rated for a level of effort for mitigating those threats. We then looked for the most risk reduction with the least effort.
How did you evaluate your threat model?
After completing the threat model, we looked at our overall process, identified where we struggled, and figured out how to characterize that in the retrospective section of our threat model.
If you were to give this threat model to the developer team, what do you think their reaction would be? How valuable do you think they’d find it?
The hope is always that the developer team will find it enlightening and useful. In this case, there were some limitations in the documentation provided that could have been solved with more developer interaction. The real value would have been found in building those relationships with the developers during the process and bringing their understanding into the final deliverable. They will likely be surprised, but hopefully, we have worded our threat model in such a way that it isn’t seen as an attack on their system.
What did you learn from this hackathon that will make you change/upgrade/refine the way you do threat modeling today?
Several of us on the team didn’t know that LINDDUN/LINDDUN GO existed. That really gave an artful way for us to include privacy concerns during the threat modeling process.
About the creators
The creators of this threat model are one of the finalist teams from our Spring 2023 Hackathon. Let’s hear their hackathon story.
Who are on your team?
Andrew Morehouse, Security and Compliance Analyst, Decisions, @VA, US
Chris Ramirez, Principal Software Security Engineer, Axway, @AZ, US
Duncan Hopewell, Application Security Engineer, Kubota North America, @TX, US (Captain)
Nandita Rao Narla, Head of Technical Privacy and Governance, DoorDash, @CA, US
How did you work together?
Because we are all busy professionals, in different time zones, with different familial demands, we needed to figure out a way to work asynchronously with each other. We were able to accomplish that via discussions in Slack, tracking progress in Trello, and accumulating results in Google Docs and Sheets.
What was the biggest challenge you faced? How did you overcome it?
“I believe the biggest challenge was making communication work despite differences in time zones. We overcame it by utilizing Slack and Google Drive to communicate with one another. For me, an aha moment was after looking at a diagram that Duncan had put together and realizing where I could begin the enumeration of vulnerabilities.” - Andrew Morehouse
About Threat Model Collections
In this content series, we’ll publish and curate different threat model examples. The threat models can take many forms, such as graphical or textual representations, or code. The models use diverse technologies, methodologies and techniques.
Because of the growing demand for mobile apps and the shorter development cycles used by app development businesses, their security risks are frequently overlooked. According to Ponemon Institute research, 56% of security companies are unsure whether the application they designed will pass a security examination. With only a small portion of an organization’s resources dedicated to application security, we may see more app security flaws emerge from the applications they produce.
As a result, it’s critical to be aware of the security flaws in the technology you’re using to build your app. According to research, the likelihood of React security problems going undiscovered grows exponentially with each new version of React or update to third-party libraries. Knowing about React’s fundamental security issues is therefore even more crucial for React developers.
Vulnerabilities In Cybersecurity That You Should Be Aware Of:
1. Cross-Site Scripting (XSS):
React is preferred over other frameworks and libraries because of its universal rendering feature. Unfortunately, that is also why it is vulnerable to cross-site scripting attacks. Attackers use sophisticated automated scripts and crawlers to find security flaws in applications. Once a vulnerability has been discovered, the cybercriminal will attempt to steal confidential information from a website through script injection. They aim to insert harmful code into your React application’s code, but there are techniques to safeguard your React app from cross-site scripting attacks:
- Use the createElement() API, because it can automatically detect malicious code injection
- Harness the power of JSX and benefit from its auto-escaping functionality to secure applications
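JSX’s auto-escaping is an instance of the general escape-on-output defense. For illustration only, and in Python rather than JavaScript, the standard-library html module applies the same principle:

```python
# Escape untrusted input before embedding it in markup so it renders as
# text instead of executing. JSX does the equivalent automatically.
import html

user_input = '<img src=x onerror="alert(1)">'
safe = html.escape(user_input)   # -> &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
print(f"<p>{safe}</p>")          # displayed as literal text, not executable markup
```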
2. SQL and CSV Injection:
SQL injection is a type of attack and web security flaw that modifies data without the user’s knowledge. To extract data from the database, SQL code execution is required. It lets attackers create new credentials, imitate authentic ones, and gain access to admin accounts, allowing them to access the system. SQL injections come in a variety of forms and shapes. The following are some of the most frequent SQL injection attacks that target React applications:
Time-based SQL injections
Error based SQL injections
Logic-based SQL injections
CSV injection, on the other hand, occurs when websites include untrusted input in their CSV files. When such a CSV file is opened, Microsoft Excel or any other spreadsheet tool will treat any cell that begins with = as a formula.
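A common defensive pattern, sketched in Python: neutralize cells that spreadsheets would interpret as formulas before writing the CSV. The prefix list reflects commonly recommended guidance, and the data is invented:

```python
# Prefix formula-triggering cells with a single quote so spreadsheets
# render them as plain text instead of evaluating them.
import csv

DANGEROUS_PREFIXES = ("=", "+", "-", "@")

def sanitize_cell(value: str) -> str:
    return "'" + value if value.startswith(DANGEROUS_PREFIXES) else value

rows = [
    ["name", "comment"],
    ["mallory", '=HYPERLINK("http://evil.example","click me")'],
]
with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in rows:
        writer.writerow([sanitize_cell(cell) for cell in row])
```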
3. Arbitrary Code Execution:
When an attacker gains arbitrary code execution rights on a process, they can run any code or command they choose. The underlying flaw lies in the hardware or software in charge of processing arbitrary code. Because these flaws are so dangerous, they should be removed right away from services and applications used by the general public. Forcing programs to read only tokens established previously during development is one technique to mitigate this problem; by submitting a request to a server, the system can generate suitable headers. Developers must respond quickly to prevent such attacks, or their applications will become vulnerable.
4. Server-Side Rendering Attack:
Developers may be required to render an application on the server side in some cases. Regrettably, this increases the risk of data leakage. If your code uses JSON strings to convert data to strings, you should always be on the lookout for server-side rendering attacks. Such attacks are harder to detect if you have not identified the context of the data.
5. Insecure Randomness:
6. Malicious Package:
What if a malicious version of React is published directly by an attacker?
What if a hacker gets direct publish access to popular npm modules and uses them to distribute a harmful module? Apps created by developers using these modules will be insecure. A malicious module or package gathers data from your system and network and sends it to a third party, or runs malware during the installation process. To fool developers into downloading malicious packages, attackers use typosquatting, a technique that involves naming packages after their legitimate equivalents. Once downloaded and installed, such a package can wreak havoc on your system.
7. Zip Slip:
Zip Slip results from a combination of arbitrary file overwrites and a directory traversal attack: archive entries carry paths that escape the intended extraction directory. When archive files are unzipped with a susceptible library, attackers can have a malicious file extracted alongside the legitimate ones and easily overwrite existing files once the unzipping procedure is complete.
Unfortunately, any sort of file, including executables, configuration files, and key system files, might be affected by this form of attack. In other words, an attacker can achieve remote execution of arbitrary code. Developers can detect this type of attack by checking which versions of archive-processing libraries they use; once a flawed library is identified, you can put it through a directory traversal test and include Zip Slip in your security testing. Dependency vulnerability detection tools can also flag these attacks. A defensive extraction routine is sketched below.
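A minimal safe-extraction sketch in Python: resolve each archive member’s destination and refuse anything that escapes the target directory (Path.is_relative_to requires Python 3.9+):

```python
# Validate every entry's resolved destination before extracting anything,
# blocking entries such as "../../etc/passwd" that escape the target dir.
import zipfile
from pathlib import Path

def safe_extract(archive_path: str, dest_dir: str) -> None:
    dest = Path(dest_dir).resolve()
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.namelist():
            target = (dest / member).resolve()
            if not target.is_relative_to(dest):      # path traversal attempt
                raise ValueError(f"blocked zip-slip entry: {member}")
        zf.extractall(dest)                          # only runs if all entries pass
```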
In the movies, a security operations center has cool displays with surveillance software that instantly warns protectors with “Intruder Detected” in a flashing OCR font, and visualize the movement of the attacker through the network. If only it were that easy – the system would just stop the intruder in the first place.
In the real world it's so different. You have a flood of information (though most of it is actually data, not information) and vague indicators of attack that could be legitimate anomalies, complete false positives or, and I hate these as much as you do, just inexplicable "weirdness".
But the more you know your systems, the more data you collect, and the more powerful your analysis tools, the better your likelihood of actually catching attackers before they do real damage to your business. In this real training for free ™ webinar we will look at 5 indicators that evil is present on a Windows host:
- Rogue process detection
- Evidence of persistence
- Suspicious traffic
- Unusual OS artifacts
- Command/user role mismatches
I'll explain each of these indicators in detail and provide real world examples.
We will explore useful resources such as the National Software Reference Library (NSRL), a collection of known legitimate software programs that you can use to greatly reduce false positives in rogue process detection.
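As a sketch of how NSRL-style whitelisting reduces rogue-process false positives: hash each running binary and flag anything unknown. The hash set below is a placeholder, not real NSRL data, and psutil is a third-party dependency:

```python
# Hash every running process's executable and flag hashes missing from a
# known-good set (e.g., imported from the NSRL reference data sets).
import hashlib
import psutil

known_good = {"d41d8cd98f00b204e9800998ecf8427e"}   # placeholder NSRL hashes

for proc in psutil.process_iter(["pid", "name", "exe"]):
    exe = proc.info.get("exe")
    if not exe:
        continue
    try:
        with open(exe, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()  # NSRL publishes MD5/SHA-1
    except OSError:
        continue                                        # binary unreadable; skip
    if digest not in known_good:
        print(f"review: pid={proc.info['pid']} name={proc.info['name']} hash={digest}")
```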
This is all part of what many folks are calling Endpoint Threat Detection and Response ("ETDR"). A. N. Ananth, CEO of our sponsor, EventTracker, will briefly show you how EventTracker can automate detection of these five indicators of evil. We'll also talk more about what ETDR means, how new it is, and whether or not it's just the next iteration of IDS. We'll also discuss ETDR's relationship to SIEM and how log collection agents are the new "endpoint sensors".
Don't miss this technical and timely real training for free ™ event. Please register now.
IT security is also an important building block in data protection
Do you use Word and Excel and other services from Microsoft 365 in your company? Hardly any other office software is so widely used and can store so much confidential information. It is therefore a particularly valuable target for hackers and criminals of all kinds. It is worthwhile for companies to invest in employee awareness and in measures to protect the system – and also to avoid data breaches in the first place.
We have compiled a few suggestions here on how you can ensure (even) somewhat better protection (in no specific order, this list is not exhaustive):
- Activation of 2-factor authentication:
It is very beneficial for security if employees log in not only with their password, but with an additional factor as well. This factor could be sent to the cell phone, for example via text message, or generated by an authenticator app (a minimal verification sketch follows this list). For applications installed on a device, this factor is only queried initially and then at greater intervals, so there is hardly any disruption to daily work. For logins via portal.office.com it is possible to require this additional code every time. Even if a password has been lost, an unauthorized person could no longer log in with the password alone.
- Disable Word macros:
Word documents that come in as email attachments can contain malicious macros (for example, encryption Trojans). Therefore, the settings on all employee accounts should be configured to not enable macros in Office documents by default. You can enable them on a case-by-case basis if the document comes from a trusted source.
- Protected view for files from the web:
Files downloaded from the Internet should automatically open in protected view only. Editing (and executing malicious code) is thus initially not possible. Users must verify that a file is trustworthy before deliberately enabling editing.
- Individualized optics through company branding:
The online version of Microsoft 365 (in the browser) can be secured by a company’s own design. Using the company logo and individual backgrounds, employees recognize that they are in the real portal. This makes it much more difficult to tap into login data via fake pages.
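As referenced in the first list item, here is a minimal TOTP verification sketch, assuming the third-party pyotp library. In Microsoft 365 the second factor is handled by the platform itself; this only illustrates the underlying mechanism:

```python
# Time-based one-time password (TOTP) enrollment and verification sketch.
import pyotp

secret = pyotp.random_base32()           # stored server-side at enrollment
totp = pyotp.TOTP(secret)
print("provisioning URI:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

user_code = totp.now()                   # in reality, typed in by the user
print("accepted" if totp.verify(user_code) else "rejected")
```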
Do you have any other ideas on how to improve security in Microsoft 365 and other services? Then feel free to write us a message.
The more data, and especially the more sensitive data, is being processed on IT systems, the better they must be protected. This applies to Microsoft 365 in particular.
Consider Funshion malware. Sometimes classified as "aggressive malware," the base code is over four years old and is still bypassing endpoint protections. Funshion makes minor modifications to itself, rendering it invisible to the rules or signatures designed to catch it. Today there are well over a dozen variants in the wild, each designed to beat static rules. Each variant is essentially a new attack that rules cannot stop.
The good news is that AI has been able to do what rules cannot: understand that subtle variations of malware are still malware. This means AI can detect known attacks as well as attacks it has never seen before. This distinction alone puts it well beyond the capabilities of rules. So how does it work?
Forensics & Incident Response
(Discover who is behind attacks and threats)
When an attack occurs, you need answers fast. Threat actors don’t rest, and they don’t make it easy to discover who they are and what their network really looks like. They continue to attack from their virtual hiding places as long as possible, but every online activity leaves a fingerprint. Analyzing domain name, DNS, and other Internet data is a powerful way to trace those fingerprints back to the perpetrators. Finding out the true identities of threat actors and the extent of their networks of holdings requires a broad array of current and historical data. HF is the best solution to help investigators find real answers in domain name, DNS, and Internet data.
DLP processes are information protection practices created to protect sensitive or critical data in the corporate network from being erroneously or maliciously lost, misused, or accessed. DLP controls help mitigate the risk of data leakage, loss, and exfiltration by ensuring that sensitive information is identified and risk-appropriate controls are deployed to protect the information, while at the same time allowing organizations to access the data to conduct regular business. DLP is not a single piece of software, but an important component of a comprehensive data security and privacy program. To have an effective DLP strategy requires a comprehensive approach to data protection.
DLP software solutions allow administrators to set business rules that classify confidential and sensitive information, so that it cannot be disclosed maliciously or accidentally by unauthorized end users. DLP solutions can also go beyond simple detection, providing alerts, enforcing encryption, and isolating data.
Protect sensitive information with a solution that is customizable to your organizational needs. When your job is to protect sensitive data, you need the flexibility to choose solutions that support your security and privacy initiatives.
The framework puts data discovery, classification, and protection at the front-end of enterprise privacy, security, and compliance programs. It enables organizations to automatically and persistently discover, classify, understand, control and protect sensitive data to ensure compliance while allowing for greater business agility.
Seclore Rights Management, the only entirely browser-based security solution, ensures sensitive information, digital assets, and documents can be protected and tracked wherever they travel and are stored with granular, persistent usage data security controls. When integrated with Spirion, Seclore can invoke specific protections based on Spirion classification tags.
De-identification is a process for removing personally identifiable information (PII) from a data set to protect the privacy of individuals, since once de-identified, a data set is no longer considered to contain personal information. This reduces the risk of non-compliance with data privacy and security regulations.
For example, you might have a DLP policy that helps you detect the presence of information subject to the Health Insurance Portability and Accountability Act (HIPAA). This DLP policy could help protect HIPAA data (the what) across all SharePoint Online sites and all OneDrive for Business sites (the where) by finding any document containing this sensitive information that's shared with people outside your organization (the conditions) and then blocking access to the document and sending a notification (the actions). These requirements are stored as individual rules and grouped together as a DLP policy to simplify management and reporting.
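A DLP rule of the kind described above can be modeled, in deliberately simplified form, as a condition plus an action. This sketch is illustrative only and is not how Microsoft 365 DLP policies are actually implemented; the pattern and action strings are invented:

```python
# A toy DLP rule: a detector (condition) plus an action, here flagging
# U.S. SSN-like patterns in documents shared outside the organization.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def evaluate_policy(document_text: str, shared_externally: bool) -> str:
    if shared_externally and SSN_PATTERN.search(document_text):
        return "block access and notify owner"   # the policy's action
    return "allow"

print(evaluate_policy("Patient SSN: 123-45-6789", shared_externally=True))
```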
For over 30 years, Boldon James software has helped organizations manage sensitive information securely and in compliance with legislation and standards. The Boldon James Classifier products classify and protectively mark emails, documents, and files to improve data loss prevention and reduce archiving costs. Boldon James is a wholly-owned subsidiary of QinetiQ, with offices in the US, Europe, and Australia, and channel partners worldwide.
Trellix Data Loss Prevention scans, detects data, and enforces appropriate actions using contextual awareness to reduce the risk of losing sensitive data through exfiltration. When sensitive data has been legitimately sent to authorized users outside the organization, Fasoo Enterprise Digital Rights Management protects the data from subsequent transfers to unauthorized users. Fasoo Enterprise Digital Rights Management integrated with Trellix Data Loss Prevention forms an essential solution to protect sensitive data, both within and outside of the organization. Considering most data leaks originate from insiders who have or had authorized access to sensitive documents, organizations must enhance existing security infrastructures with data-centric security solutions to persistently protect data in use. This integrated solution enables organizations to allow Trellix Data Loss Prevention to scan DRM-protected documents and apply policies; enforce policy engines to encrypt (reclassify) as DRM-protected documents; and secure data persistently to reduce the risk of losing sensitive data from both insiders and outsiders.
Securonix provides a leading information risk intelligence platform for security and compliance professionals. The platform consumes identity, access, and activity information from any source and then uses behavior, access, and identity risk analytics to continuously identify the highest risk users, resources, and activity in the environment for proactive management. At the enterprise application level, such as SAP and Oracle, Securonix goes deeper to automatically and continuously identify and fingerprint sensitive data for data loss protection while monitoring high-risk activity and access.
Spirion (formerly Identity Finder) is the leading provider of sensitive data risk reduction solutions. The company's flagship product, the Spirion data platform, accurately finds all sensitive data, anywhere, anytime, and in any format on endpoints, servers, fileshares, databases, and in the cloud with practically zero false positives. For more than a decade, Spirion has been helping organizations eliminate and prevent sensitive data sprawl by reducing the sensitive data footprint by 99% or more and operationalizes data protection policies and controls to meet a broad range of compliance requirements from PCI to PII to HIPAA and beyond. Spirion is used by thousands of organizations among leading firms in the healthcare, public sector, retail, education, financial services, energy, industrial, and entertainment markets.
Absolute (Vancouver, Canada). Absolute offers near real-time security breach remediation. The company's Absolute Persistence product, a self-healing endpoint security technology, provides IT personnel control over devices and data. The company's cloud-based visibility allows for remote IT asset management and security for healthcare providers, including support from its healthcare information security and privacy practitioners and ASIS-certified protection professionals.
Dataguise (Fremont, Calif.). Dataguise provides a solution for global data governance, allowing organizations to detect, protect and monitor sensitive data in real time on the premises and in the cloud. Healthcare organizations can use the company's Hadoop product to streamline and analyze billing data to reduce costs and fraud incidents; digitize patient records; and incorporate sensor and internet of things health monitoring data.
Varonis (New York City). Varonis' platform collects, stores and analyzes metadata in real time to protect data from cyberattacks. Organizations can monitor their unstructured data using the company's platform. Varonis specializes in protecting file and email systems storing spreadsheets, word processing documents, presentations and audio and video files that contain sensitive information. The company also offers a HIPAA compliance crash course.
Virtru (Washington, D.C.). Virtru's products allow businesses and individuals to control access to emails, documents and data regardless of where the files are shared. In the healthcare space, the company's technology allows providers to share HIPAA-compliant emails and attachments, automatically identifying and encrypting personal health information. The company focuses on business privacy and data protection for more than 5,000 organizations worldwide.
Zenedge (Aventura, Fla.). Zenedge offers security for web applications and networks. The company's platform stops malicious bot traffic and distributed denial-of-service attacks and offers ongoing monitoring and security updates. The company's cybersecurity platform includes an artificial intelligence engine and advanced bot mitigation and management. Zenedge's cybersecurity solution can protect medical records and health information.
Zimperium (San Francisco). Zimperium is a mobile threat management platform designed to deliver continuous cyberthreat protection for mobile devices and applications. This on-device solution can detect threats in real time. As healthcare organizations rely on mobile devices to communicate and provide better care in the hospital and home care settings, Zimperium's zIPS app provides continuous self-service mobile threat detection and remediation.
If your business adopts a data-centric security approach, i.e., the use of technologies like encryption and tokenization, you can minimize the risk of attacks and protect sensitive information from reaching attackers.
IBM Security is one of the best data security and protection solution providers. The platform enables businesses to protect their sensitive information by monitoring and auditing all data activities across multiple environments. Further, it helps reduce operational complexity and enables enterprises to meet privacy regulations. With the help of the IBM data security platform, you get greater visibility and can generate insightful reports that help in investigating and remediating cyberthreats. Users can also easily discover and manage data security vulnerabilities in real time. It also supports encryption and tokenization techniques. If you are looking for a data security platform with features similar to Comforte's, consider IBM data security, as it offers full visibility and covers the above-listed parameters.
The Header class can add headers to the current HTTP response for several purposes, such as redirecting the request to another URL or setting the content type or content disposition for downloads.
The class can also set the response status and automatically determine the textual description by looking up a list of known response codes.
It issues the response header output commands once the list of headers is fully defined.
PHP 5.1 or higher
MPEG Video Security using Motion Vectors and Quadtrees
- Anshul Singhal, Manik Lal Das
- Journal of Mobile, Embedded and Distributed Systems (open access)
Securing multimedia data communication over a public channel is a challenging task for protecting digital content from piracy. Anti-piracy measures, digital watermarking, and ownership verification are some mechanisms for authenticating digital content. Because multimedia data is generally huge, it is difficult to carry out full data encryption and compression for real-time communication. In this paper, we discuss the partial encryption approach, in which a part of the compressed data is encrypted while the rest remains unencrypted. The approach results in a significant reduction of computation and communication time.
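A toy illustration of partial encryption, assuming the third-party cryptography library: only a slice of the compressed stream is encrypted. Real schemes select structurally important elements such as motion vectors, not a simple prefix:

```python
# Encrypt only a fraction of the "compressed" stream; the rest is sent as-is.
# The byte string below is a stand-in for an MPEG bitstream.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

compressed = bytes(range(256)) * 40          # fake compressed bitstream
cut = len(compressed) // 10                  # encrypt the first 10% only
protected = cipher.encrypt(compressed[:cut]) + compressed[cut:]
print(f"encrypted {cut} of {len(compressed)} bytes")
```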
SoftPerfect Network Protocol Analyzer is a free professional tool for analysing, debugging, maintaining and monitoring local networks and Internet connections. It captures the data passing through the dial-up connection or Ethernet network card, analyses this data and then represents it in a readable form.
This is a useful tool for network administrators, security specialists, network application developers and anyone who needs a comprehensive picture of the traffic passing through their network connection or a segment of a local area network.
SoftPerfect Network Protocol Analyzer presents the results of its analysis in a convenient and easily understandable format. It can defragment and reassemble network packets into streams. The program also features full decoding and analysis of network traffic based on the following low-level Internet protocols: AH, ARP, ESP, ICMP, ICMPv6, IGMP, IP, IPv6, IPX, LLC, MSG, REVARP, RIP, SAP, SER, SNAP, SPX, TCP and UDP. It also performs a full reconstruction of top-level protocols such as HTTP, SMTP, POP, IMAP, FTP, TELNET and others.
The flexible system of fully configurable filters can be used to discard all network traffic except the specific traffic patterns you wish to analyse. There is also a packet builder, which allows you to build your own custom network packets and send them into the network. You could use the packet builder feature to check the network’s protection against attacks and intruders.
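A comparable custom packet can be built with the third-party scapy library. The destination below is a documentation address, and such probes should only be sent to hosts you are authorized to test:

```python
# Build and send a single TCP SYN probe; requires root/administrator rights.
from scapy.all import IP, TCP, send

probe = IP(dst="192.0.2.10") / TCP(dport=80, flags="S")   # crafted SYN packet
send(probe, verbose=False)
```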
An application memory firewall detects what other security solutions miss and protects applications and memory from cyber attack
Looking But Not Seeing
It’s the things you can’t see that threaten your network. Your anti-virus solution dutifully scans your disk drives for malicious files and quickly finds any threat it recognizes. But today’s increasingly sophisticated attacks have outsmarted A/V software by not placing malware files on your system in the expected way. Instead, fileless malware is inserted into memory, where your scanners can’t see or identify it, including attacks that occur during runtime at the process memory level.
Perimeters Filled with Gaps
The vast majority of security products focus on the pre-execution stage – identifying and stopping known malware at the network or host perimeter. That approach made sense in the past but today’s perimeter is ill-defined and filled with gaps. Current tools can’t stop malicious activity that hasn’t been seen before. These gaps in knowledge and visibility leave a significant portion of the application stack exposed. Such vulnerabilities have not been lost on attackers.
Lack of Visibility into Memory Usage Makes for Great Opportunity for Hackers
While very few security practitioners understand how process memory works, inventive attackers have found creative ways to exploit the dynamic and transient nature of process memory. Techniques include the use of fileless malware described above, memory corruption, code insertion and other evasive methods. All of these techniques bypass conventional security, can only be identified during runtime, and don’t leave evidence behind after execution.
Advanced memory hacking tools that exploit these gaps, like EternalBlue and DoublePulsar, are now widely available. Attackers frequently modify them to avoid detection, and they’ve become major attack vectors. This has resulted in an endless series of massive attacks like WannaCry, NotPetya, Industroyer, BlackEnergy, Triton and others, causing billions in losses and global disruption.
Manipulating Legitimate Processes to Weaponize at Runtime
When employing traditional malware or fileless techniques, most attacks today have some element that exploits process memory. Assailants can take advantage of host system memory management functions, buffer overflow errors, pointer arithmetic, and uninitialized memory to turn hosted applications into attack weapons. By combining flaws in software and hardware with a series of unvalidated data inputs targeting process memory, attackers corrupt legitimate processes to disable security, leak information, or execute application functions in uncommon ways.
Another common attack strategy is to escalate privileges or control legitimate processes that have high privileges. Privileged processes typically have broad access to memory, can modify system security configuration, add a trusted root certificate, change registry settings, or corrupt memory for specific code sets just as code is being executed. From here, attackers can hijack control over application servers, access databases, or use APIs to connect to other systems.
Critical application processes are at the greatest risk, including those that are running in air-gapped environments. Once skilled malicious hackers have bypassed deficient conventional security, they can setup backdoors, and dwell within networks for extended periods without setting off alarms.
Stopping Memory Exploits with Virsec’s First Application Memory Firewall
As described above, the typical methods of A/V and other security tools are powerless to spot, much less block, these attacks. Virsec takes a completely different approach to filling the void left by these other security solutions.
Virsec brings to the table the industry’s first Application Memory Firewall – a comprehensive set of memory protection capabilities that monitor and secure the critical juncture between applications and runtime process memory. Virsec effectively detects and stops advanced fileless and zero-day techniques including buffer overflow attacks, stack smashing, DLL injections, return-oriented programming (ROP) and ROP gadgets, side channel attacks and corruption of configuration data.
Mapping and Enforcing Proper Execution
Compiled applications may be complex, but they should be predictable. As they are loaded into process memory, fixed assignments are made for memory usage and source-to-target transitions. Virsec’s patented Trusted Execution™ maps all legitimate memory assignments, creating a reference AppMap. If there is any deviation during execution, this is a positive sign of compromise, and Virsec stops the exploit within milliseconds. Because this process is deterministic, Trusted Execution eliminates false positives.
The Application Memory Firewall has a broad range of additional capabilities. Many attacks insert rogue DLLs directly into process memory as applications are executing. Virsec automatically whitelists legitimate DLLs and can instantly detect these changes and restore the correct DLLs. The solution can also detect and stop buffer overflows, stack smashing and a wide range of side-channel attacks.
Take Action Without Guessing Before Damage Is Done
Stopping attacks before they execute relies on prior knowledge, signatures and rules, or guessing. While endpoint protection vendors claim that machine learning, AI or other mysterious tools can predict what’s coming next, this always relies on learning, guesswork and outsmarting equally inventive attackers. As we’ve seen repeatedly, smart attackers can easily fool predictive models and stay several steps ahead.
Stopping attacks after the fact is too late. Endpoint protection and response (EDR) tools claim to learn from attacks so they can be stopped when they reappear. But closing the barn door after the horses have escaped is rarely satisfactory.
Only Virsec stops attacks during execution, in runtime to keep applications on track. The Application Memory Firewall delivers results that are far more effective and nearly instantaneous with unprecedented accuracy. This eliminates the scourge of false positives that undermine many security tools and drown out real attacks.
Virsec does more than detect attacks. Because of its accuracy, Virsec can automatically take surgically precise protection actions within milliseconds to terminate rogue processes, disconnect specific rogue users, restore corrupted libraries, or signal network tools like firewalls or WAFs to disable attackers at the network perimeter.
In short, rather than endlessly chasing an endless stream of unknown and external threats, Virsec focuses on what applications should be doing, how they are actually executing during runtime, down to the memory level, and ensuring they don’t do anything else.
Learn more about Virsec’s First Application Memory Firewall
HAPI: Onchain Cybersecurity Protocol for DeFi projects
HAPI is a one-of-a-kind decentralized security protocol that prevents and interrupts potential malicious activity within the blockchain space. HAPI works by leveraging external, off-chain data as well as on-chain data accrued directly by HAPI, and this data is publicly available.
One can imagine HAPI being an all-encompassing, overarching protocol that combines crypto intelligence data from multiple sources allowing the most accurate information on malicious activity, and compromised wallets.
HAPI is the only crypto cybersecurity solution that can be integrated into DEXes and DeFi protocols, preventing money laundering: projects embed or call HAPI Smart Contracts and check each transaction against the database.
There are 3 HAPI use cases:
- Reporting and Alert System. HAPI enables anyone to report unlawful players to the database and alert those in the network through our real-time-updating, live-monitoring RCI (report and check interface). By simply staking HAPI and reporting malicious actors, you can help us make crypto a safer place!
- Check address. HAPI offers a yet-unseen, publicly available check functionality that allows vetting an address for previous involvement in malicious activity. This way, you can be sure that the address of an individual you are about to interact with hasn’t been engaged in any underhanded schemes before.
- Smart Contracts for DEX and DeFi Protocols. HAPI is the only security solution that embeds directly into DeFi and aids in Money Laundering prevention from within. We supply our Smart Contracts with the most accurate data from multiple sources and utilize our own cybersecurity department, HAPI Labs, to secure DeFi in the most efficient way possible.
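To illustrate where the address-check use case could sit in an application flow, here is a purely hypothetical sketch. HAPI’s real integration is via smart contracts and its own interfaces; the endpoint and response fields below are invented for illustration:

```python
# Hypothetical pre-transaction address check; the URL and JSON shape are
# NOT HAPI's real API -- they are invented placeholders for illustration.
import requests

def address_is_flagged(address: str) -> bool:
    resp = requests.get(f"https://api.example-risk-check.io/v1/address/{address}")
    resp.raise_for_status()
    return resp.json().get("risk_category") not in (None, "none")

if address_is_flagged("0x0000000000000000000000000000000000000000"):
    print("refuse transaction: counterparty flagged for malicious activity")
```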
HAPI is not just a protocol. HAPI is the coalescence of a myriad of methods to contend with the deluge of exploits within blockchain ecosystems. We are using the most robust and technologically novel tools that help us gain an edge in the battle against malicious intrusions: live automated monitoring and tracing, address vetting and case categorization, and case-by-case deep analysis of threats. Our team also provides continuous involvement of HAPI Labs, a cybersecurity agency of 10 security specialists who manually investigate each partner’s case.
HAPI makes a difference where a difference is bound to be made! Created to prevent!
The security challenges presented by the mobile and web applications, services, and server approach are formidable and unavoidable. Many of the features that make mobile and web applications, services, and servers attractive (including greater flexibility and accessibility of data, dynamic application-to-application connections, and relative autonomy, i.e., lack of human intervention) are at odds with traditional security models and controls.
In this training you will learn how to secure mobile and web-based applications, web services and the servers they run on and also integrate robust security measures into the application development process by adopting proven architectures with adherence to OWASP best practices.
In addition, you will learn how to implement, test, and audit secure applications; identify and mitigate application vulnerabilities; and encrypt mobile and web traffic by leveraging SmartLearning’s application security expertise.
You will attain skills and knowledge to:
- Correctly implement and test secure Mobile and Web applications
- Identify, analyse, and remediate Mobile and Web applications security risks
- Encrypt Web traffic with https
- Identify & protect core Ajax components
- Secure XML web services
- Detect unauthorized file system modification
- Avoid cross site scripting (XSS)
- Prevent code injection with input validation
- Implement URL access restrictions
- Identify & mitigate the OWASP Top Ten vulnerabilities
- Audit Mobile and Web application security
Who Should Take this Course?
- Application Security Specialist
- Mobile & Web Developer
- Application Penetration Tester
- Application Security Engineer
- Application Security Architect
- Application Security Fundamentals
- Malware Analysis
- System & Network Security
- Penetration Testing
- Vulnerability management
The time to breach a honeypot? Blink and you'll miss that you've just been compromised.
Security researchers who set up 320 cloud honeypots globally to learn how quickly threat actors would target exposed cloud services found that 80% of them were breached in under 24 hours -- with a single threat actor compromising 96% of 80 Postgres honeypots globally within a blistering 30 seconds.
Palo Alto Networks’ Unit 42 evenly deployed four types of honeypots, with remote desktop protocol (RDP), secure shell protocol (SSH), server message block (SMB), and Postgres database services publicly exposed. These were kept live for a month, between July and August 2021, to gather intelligence.
Its researchers intentionally configured a few accounts with weak credentials that granted limited access to the application in a sandboxed environment, with each honeypot reset and redeployed when a threat actor successfully authenticated via one of the credentials and gained access to the application.
The time to breach a honeypot varied, but all of the 320 honeypots were breached within a week. (Interestingly, 85% of the attacker IPs were observed only on a single day, a number that indicates that Layer 3 IP-inspecting firewalls are increasingly ineffective, as attackers rarely reuse the same IPs to launch attacks.)
An earlier network-scanning project by the group identified over 700,000 scanner IPs daily that enumerated (the process of extracting user names, machine names, network resources, and other services from a system) more than 9,500 different ports every day, and Unit 42 used that research to inform the experiment.
“We were curious whether proactively blocking the network scanning traffic could prevent attackers from discovering the honeypots and reduce the number of attacks” Unit42 noted.
“To test the hypothesis, we created an experimental group of 48 honeypots and applied firewall policies to block IPs from known network scanners. The firewall policy blocks the IPs that have been scanning a specific application daily in the past 30 days. Figure 6 compares the number of attacks observed on each honeypot between the control group (no firewall) and the experimental group (with firewall). We could not see a significant difference between the two groups, meaning blocking known scanner IPs is ineffective in mitigating attacks…”
Palo Alto Networks said it recommends all cloud service users:
- Create a guardrail to prevent privileged ports from being open.
- Create audit rules to monitor all the open ports and exposed services.
- Create automated response and remediation rules to fix misconfigurations automatically.
- Deploy next-generation firewalls in front of the applications to block malicious traffic.
Honeypots can be a vital and low cost way of gaining intelligence into what's likely to hit you.
As security researcher Jason Schoor earlier told us: "Public-facing honeypots can provide a wealth of insight into active reconnaissance campaigns against your network: the types of attacks, origination, and more, all in (almost) real time. Placed within a network, honeypots can produce high fidelity detection alerts.
"As these systems should have no actual production value, any attempted accesses whatsoever should be considered malicious and trigger incident response activity, including detection rule tuning and security control effectiveness review for the real production systems you’re trying to protect."
One highly regarded honeypot-type security tool (not actually a honeypot as you'd know it) that can be actively deployed throughout your internal network (hardware, virtual or cloud-based) is Thinkst Canaries. These are deployed inside your network and communicate with the hosted console through DNS; i.e., the only network access they need is a DNS server capable of external queries. They come recommended.
Hack 38. Save Paper by Reducing Whitespace
Empty data fields in a long report can pose a problem when it comes time to print the report. Imagine a list of 1,000 contacts, and only half have a phone number entered into a phone number field. When you designed the report, you included the phone number field for contacts that have a phone number. However, when you print the report, you see 500 empty spaces representing the phone number fields of customers without phone numbers. When other data fields are empty as well, the situation just gets worse.
All told, this whitespace can account for 50 or more extra pages in the report, depending on how the report is laid out. Figure 4-35 shows a report that suffers from this problem. Some contact information is missing, yet room is still set aside for it.
Figure 4-35. A waste of paper
Figure 4-36. Setting the Can Shrink property
On the property sheet, set the Can Shrink property to Yes. Apply this to the fields in the detail section and to the detail section itself.
With Can Shrink set to Yes, any empty data fields on this report won't take up space. Figure 4-37 shows the improved report.
Figure 4-37. A more compact report
In that figure, you can see some contacts are missing a phone number, and others are missing all the data. In both cases, the amount of empty space shrinks. As you can see when comparing the report in Figure 4-37 with the one in Figure 4-35, even the first page displays more contacts. As wasteful whitespace is dropped, the number of pages on the report is reduced.
The Can Grow property provides the opposite functionality as well. When designing a report, place controls where they make the most sense. Occasionally, you might have more data than you can display in the field, given the size of the bound control. Setting Can Grow to Yes lets the field expand as needed to print all the data.
The Wazuh Open Source Security Platform integrates with the Elastic Stack to allow you to quickly access and visualize alert information, greatly helping during an audit or forensic analysis process, [...]
Emotet is a malware originally designed as a trojan, and mainly used to steal sensitive and private information. It has the ability to spread to other connected computers and even [...]
Fluentd is an open source data collector for semi and un-structured data sets. It can analyze and send information to various tools for either alerting, analysis or archiving. The main [...]
Metasploit is the most used penetration testing framework in the world. It contains a suite of tools that you can use to test security vulnerabilities, enumerate networks, execute attacks, and [...]
Wazuh can integrate with YARA in different ways. This blog post will focus on automatically executing YARA scans by using the active response module when a Wazuh FIM alert triggers.
Learn how to monitor the data stored in your S3 with Amazon Macie and Wazuh.
Learn how to keep track of changes made to your AWS resources and monitor user activity with AWS CloudTrail and Wazuh.
In 2019, more than 700 vulnerabilities were discovered in Microsoft operating systems. As soon as they are in [...]
Wazuh provides an out-of-the-box set of rules used for threat detection and response. This ruleset is continuously updated [...]
In this post we will make the necessary steps to deploy a Wazuh cluster with [...]
In Windows systems, a Group Policy Object (GPO for short) is a feature that allows an administrator to tune the operating system’s settings and they’re widely used in Active Directory [...]
Wazuh helps you comply with the security standards in which logs are required to be maintained for several months so that they can be provided on the spot in case [...]
It is currently unknown how these websites are being compromised. According to WordFence, a vendor of security products for WordPress, the hacker works by adding a PHP file with 25,000 lines of code to every website he manages to gain access to.

This file is a bot client which connects to an IRC (Internet Relay Chat) server and listens for commands posted in the main chat. Each time the botnet’s owner logs in and issues a command, all infected websites execute it.

While WordFence has not elaborated on the bot client’s technical capabilities, such botnets can be used to launch DDoS attacks, carry out brute-force attacks, insert search engine optimization (SEO) spam on the compromised websites, or send spam email from the underlying compromised servers.

“A four-year-old mystery resolved”

The 25,000-line bot client file contained configuration information, including the IRC server’s IP address, port, and channel name (#1x33x7). Researchers took a look at what was inside the botnet’s control panel, which, being an IRC chat room, allowed them to connect freely.

After getting access to the IRC channel, WordFence researchers managed to crack a long-standing mystery: the botnet’s password.
Read more: http://news.softpedia.com/news/german-man-behind-irc-controlled-wordpress-botnet-507610.shtml
This particular botnet was secured with a hashed password string, 2cbd62e679d89acf7f1bfc14be08b045, which allowed the botnet owner to authenticate every command he issued in the main IRC chat room.

Site owners who noticed their hacked websites frequently asked for help in cracking this password, but to no avail. A Google search reveals requests as early as December 2012, meaning the criminal’s botnet had been around for nearly four years.

Because researchers had gained access to the main IRC window, they saw the criminal issue commands, authenticating with the password in its cleartext version: 1x33x7.0wnz-you.************[REDACTED].

“Hunting down the botnet’s operator”

In this same chat room, researchers found a list of infected websites, shown as the chat room’s users, with technical details about the compromised platform as usernames.

The list of hacked websites included everything from Apache servers on FreeBSD to rarer instances of Windows Server 2012 or Windows 8.
In the user list, they also found accounts belonging to the botnet’s master: LND-Bloodman and da-real-LND.

IRC chat rooms allow members to run basic “whois” commands that reveal information about other users. Running a whois query against the criminal’s accounts showed IP addresses and a probable email address containing the crook’s first name.

“Botnet operator is based in Germany”

The IP address was from Germany. The Bloodman account and the IRC channel’s name, 1x33x7, which the attacker also used as an alternative username, pointed investigators to various social media accounts on Twitter, YouTube, and YouNow. These accounts showed that the criminal is a German-speaking man.

Further incriminating evidence was found on his YouTube channel, where he published a video in which he bragged about his botnet. This video linked his real-life persona with the usernames used in the source code of the botnet’s client file.

With the botnet’s password in hand and his real identity established, WordFence could now take down his botnet and report his criminal activity to German authorities.

On its blog, in the comment fields, a WordFence spokesperson said it did not notify authorities about the botnet’s presence, mainly because it would be too time-consuming for the company.

Furthermore, the Computer Fraud and Abuse Act (CFAA) also prevents the company from taking down the botnet without consent from authorities, so at the time of writing, the botnet remains active.
by Lee, C. C., Tan, T. G., Sharma, V., & Zhou, J.
The threat of quantum computers is real and will require significant resources and time for classical systems and applications to prepare for the remedies against the threat. At the algorithm-level, the two most popular public-key cryptosystems, RSA and ECC, are vulnerable to quantum cryptanalysis using Shor’s algorithm, while symmetric key and hash-based cryptosystems are weakened by Grover’s algorithm. Less is understood at the implementation layer, where businesses, operations, and other considerations such as time, resources, know-how, and costs can affect the speed, safety, and availability of the applications under threat.
We carry out a landscape study of 20 better-known threat modelling methods and identify PASTA, when complemented with Attack Trees and STRIDE, as the most appropriate method for evaluating quantum computing threats to existing systems. We then perform a PASTA threat modelling exercise on a generic Cyber-Physical System (CPS) to demonstrate its efficacy and report our findings. We also include mitigation strategies identified during the threat modelling exercise for CPS owners to adopt.
Published at https://link.springer.com/chapter/10.1007/978-3-030-81645-2_11
You may access the article here: https://pureadmin.qub.ac.uk/ws/portalfiles/portal/240369002/Quantum_Computing_Threat_Modelling_on_a_Generic_CPS_Setup_ACNS_Workshop_20210503.pdf
Data transmission testers (DSTs) are the latest in a long line of tools developed by experts to help people protect their data. But DST tools themselves are rarely easy to use, and in fact there is a lot of overlap between DSTs and other types of protection tools. We put together this guide to help you get started.

1. Data transmission tools are designed for data storage and transmission. DST tools protect data only.
2. DST tools use software to transfer data to and from devices and servers. Data can be encrypted, but most tools don’t use encryption.
3. Data storage is a very specific type of storage. Many DST services rely on data being stored, not used.
4. Data is only transferred in the form of data packets. DST tools only transfer data in binary form, such as text or images.
5. Most DST applications can be downloaded and installed from a website. Some tools have built-in websites to support them.
6. Most tools use the same data transmission process. For example, DST apps typically use a TCP connection to transfer files, and the tool then uses an HTTP connection to retrieve data.
7. DST tools usually rely on network connectivity to allow for high-throughput data transfers. For this reason, DST services may require a relatively large number of servers.
8. Most services can be run on any computer, because most tools use network protocols that support a common data transmission protocol, and they all use the Internet to communicate with each other.
9. Data sent over the Internet is typically encrypted. DSTs typically use cryptographic keys to secure data sent over TCP connections, encrypted using a public-key encryption algorithm.
10. Data transferred over the Internet is often encrypted with an RSA-based public key (for example, a 1024-bit RSA key), long a common algorithm in the industry (note that 1024-bit RSA is now considered too weak for new deployments).
11. The best DST service is usually an open source one, and many DST programs are open source. For some, this means you can install them on a computer or workstation. In these cases, most services rely heavily on third-party open source tools, which let you use your own tools to protect your data.
12. The majority of DST solutions do not have any real-time data backup and recovery. Instead, they provide tools that let you save data only when you need it, not automatically.
13. Many services don’t require users to log into their DST app. Instead of presenting a login page, they simply send you a link to a page that lets you log in. If you click on the link, the data transmission tool sends a request to the server, which takes the user’s IP address and then encrypts the connection. This can take anywhere from a few minutes to several hours.
14. Some DST systems rely on automatic encryption. For these tools, users can log into the service either by clicking a link on the tool’s page or by clicking the “Connect” button at the top of the page. This process can take up to a minute.
15. Some services can’t be used if the network is too slow. In this case, you can still encrypt data and send it to the device you want to use, but it will take more time.
16. Some providers require that your data be encrypted with a password, but some services do and some don’t. The service provider can choose to use an automatic password, which encrypts the data in a specific format. For DST, this usually means a password generated from the user’s details. Some applications do not use passwords at all.
17. Some devices use hardware-based encryption with algorithms that don’t rely on the hardware to provide the data. This means it’s more secure for your data to remain encrypted.
18. Some digital rights management (DRM) technologies are used to protect data stored on devices. These DRM systems include password-based access controls, encryption technologies, and other safeguards. For more on DRM, read our guide on Digital Rights Management.
19. Some data protection applications also use encryption to protect the data on servers. These protection methods are sometimes called “cloud storage.”
20. Data protection tools can only protect data when it’s in a particular format or when it hasn’t been modified in any way. That means you can’t protect data with encryption tools if, for example, it has been deleted from the cloud.
21. Some apps will use software in the background to store data, but there’s no guarantee that it will keep the data safe. In some cases, apps that protect data in this way may need to download data from servers before it can be protected.
22. Some software can be installed on the phone of a DST client.
Building Intrusion Detection Honeypots
When an attacker breaks into your network, you have a home-field advantage. But how do you use it?
Although different attackers might attack your network in unique ways, their broad motivations and movements reveal common patterns that defenders can take advantage of. When you pair these patterns with your knowledge of your own network, you create a scenario ripe for deception.
By strategically giving attackers things they want to find, you can lure them into exposing themselves. Intrusion Detection Honeypots are the tools that make this possible.
Intrusion Detection Honeypots are security resources placed inside your network whose value lies in being probed and attacked. These fake systems, services, and tokens lure attackers in, enticing them to interact. Unbeknownst to the attacker, those interactions generate logs that alert you to their presence and educate you about their tradecraft.
While traditional detection mechanisms like IDS can be effective, they are often time-consuming to maintain and tune. Analysts spend significant time dealing with false positives, which makes IDS inaccessible to smaller organizations. With honeypots placed inside your network, nobody should ever legitimately interact with one. Without legitimate traffic to sift through, any interaction becomes anomalous, limiting the potential for false positives. That makes IDH an incredibly high-efficacy form of intrusion detection that requires minimal tuning. IDH scales down just as well as it scales up.
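To make the idea concrete, here is a minimal sketch, not taken from the course material, of the core IDH pattern: a fake service with no production value, where any connection at all is treated as a high-fidelity alert. The port, banner, and log path are illustrative choices.

```python
# Minimal IDH sketch: log every touch of a fake service as an alert.
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # poses as an alternate SSH port
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        with conn:
            # Nobody should legitimately connect here, so every hit is anomalous.
            logging.info("honeypot touched by %s:%d", *addr)
            conn.sendall(b"SSH-2.0-OpenSSH_8.2\r\n")  # tiny banner for interactivity
```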
While the concept of IDH has been around for a while, many myths exist about using honeypots for detection. Until recently, there hasn’t been any formal education on leveraging this technology in production networks.
It’s time we change that and empower defenders with the framework and tools they need to leverage deception against attackers.
Building Intrusion Detection Honeypots will teach you how to build, deploy, and monitor honeypots designed to catch intruders on your network. You’ll use free and open source tools to work through over a dozen different honeypot techniques, starting from the initial concept and working to your first alert.
Building Intrusion Detection Honeypots is the seminal course on strategic honeypot deployment for network defenders who want to leverage deception to find attackers on their network and slow them down.
- What makes an intrusion detection honeypot different from research honeypots.
- How to leverage the four characteristics of honeypots for the defender’s benefit: deception, interactivity, discoverability, and monitoring.
- How to think deceptively with an overview of deception from a psychological perspective.
- How to use the See-Think-Do framework to integrate honeypots into your network and lure attackers into your traps.
- Tools and techniques for building service honeypots for commonly attacked services like HTTP, SSH, and RDP.
- How to hide honey tokens amongst legitimate documents, files, and folders.
- How to entice attackers to use fake credentials that give them away.
- Techniques for embedding honey credentials in services and memory so that attackers will find and attempt to use them.
- How to build deception-based defenses against common attacks like Kerberoasting and LLMNR spoofing.
- Monitoring strategies for capturing honeypot interaction and investigating the logs they generate.
For each honeypot, I’ll explain its overall goal and how it allows you to control what the attacker sees, thinks, and does. I’ll demonstrate the step-by-step instructions of how to build the honeypot. I’ll also advise on how to place it for discoverability in your network, and we’ll walk through considerations for making your honeypot more interactive to collect additional intelligence about the attacker. Finally, I’ll show you how to configure monitoring and alerting for the honeypot so you’ll know when an attacker interacts with it.
Intrusion Detection Honeypots are one of the most cost-effective, reliable forms of intrusion detection. If you want to start learning how to use deception against attackers with honey services, tokens, and credentials, Building Intrusion Detection Honeypots is the course you’re looking for.
Building Intrusion Detection Honeypots Includes:
Over 12 hours of demonstration videos. These videos will provide step-by-step walkthroughs of setting up each individual honeypot and considerations for deception, discoverability, interactivity, and monitoring.
The Intrusion Detection Honeypots book. You’ll receive a free electronic copy of Intrusion Detection Honeypots: Detection through Deception by Chris Sanders.
Hands-on labs to help you develop and test your honeypots. For each honeypot I demonstrate, I’ll discuss how to get logs from it and ship them to monitoring infrastructure. If you’ve already got monitoring infrastructure to receive those logs, great! If not, I’ll show you how to build a simple ELK-based receiver to capture logs and how to leverage third-party automation services like Zapier to generate alerts. These mechanisms make honeypot-based alerting accessible to organizations of all sizes.
Configuration files to help you along. I’ll provide logging configurations (Logstash, Winlogbeat, Filebeat) and detection signatures (Sigma and Suricata) for every honeypot I demonstrate in the class.
Connect with deception-minded practitioners. You’ll build the honeypots I demonstrate and discuss unique ways to deploy deception techniques with other students.
Access to Chris Sanders office hours. I maintain open office hours for students of my Applied Network Defense courses, with a few sessions per month. I’ll be available during this time for face-to-face video to answer any questions you may have or anything you’d like to discuss related to the course material or how you might apply it in your work.
Participation in our student charitable profit-sharing program. A few times a year we designate a portion of our proceeds for charitable causes. AND students get to take part in nominating charities that are important to them to receive these donations.
Frequently Asked Questions
Is this course live?
This is NOT a live course. It’s an online video course you can take at your own pace.
How long do I have access to the course material?
You have access to the course for six months following your purchase date. If you need more time, you can extend your access for a small monthly fee.
Are there any prerequisites or lab requirements for this course?
This course is designed for all security practitioner skill levels and assumes no prior honeypot experience. However, it is helpful to have a basic understanding of security monitoring. There are no specific system requirements for this course, but if you want to follow along building every honeypot I discuss, you’ll need access to Windows and Linux systems.
How much overlap is there between the book and the course?
The Building Intrusion Detection Honeypots course is based on the Intrusion Detection Honeypots: Detection through Deception book. You can think of the book as a textbook for the course. However, the course allows for more detailed hands-on demonstrations, the discussion of additional nuance, and coverage of more scenarios for deploying honeypots on your network. The course also contains additional honeypot techniques not covered in the book and will have more added over time.
How much time does it take to do this course?
Given the amount of content, it takes people dramatically different times to complete the material. If you focus most of your time on it, you can complete everything in about a week. Most choose to spread it out over a few weeks as they practice the concepts demonstrated.
How many CPEs/CMUs is this course worth?
Organizations calculate continuing education credits in different ways, but they are often based on the length of the training. This course averages 15 hours of video+lab work.
Do you offer discounts for groups from the same organization?
Yes. To inquire about discounts or group invoices, please contact us at [email protected].
Bulk discounts are available for organizations that want to purchase multiple licenses for this training course. Please contact us to discuss payment and pricing. |
When reviewing IPS attack logs, Web Attack entries show the Remote Host IP address as 127.0.0.1 with a remote port of 9000.
The most likely reason for this is that Zscaler is installed and filtering internet traffic. By default Zscaler listens on port 9000.
Release: 14.3
Zscaler examines traffic and acts as an intermediary between the client and the internet. As such, when IPS detects malicious activity from a website, it's going to detect the localhost address on port 9000 as the remote source of the attack. The Intrusion URL should remain unchanged and will still serve as an accurate source for the attack.
Verify that Zscaler is installed and listening on port 9000 before accepting this as the cause. |
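One way to perform that check is a quick connectivity probe from the affected endpoint. The Python sketch below is illustrative only: it simply tests whether anything is listening locally on TCP port 9000.

```python
# Probe 127.0.0.1:9000 to see whether a local proxy (e.g. Zscaler) is listening.
import socket

def local_port_open(port: int, host: str = "127.0.0.1", timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("127.0.0.1:9000 listening:", local_port_open(9000))
```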
MATEC Web Conf.
Volume 335, 2021
14th EURECA 2020 – International Engineering and Computing Research Conference “Shaping the Future through Multidisciplinary Research”
Number of pages: 14
Published online: 25 January 2021
Honeypot Coupled Machine Learning Model for Botnet Detection and Classification in IoT Smart Factory – An Investigation
1 School of Computer Science and Engineering (SCE), Taylor’s University, Malaysia
* Corresponding author: [email protected]
In the United States, the manufacturing ecosystem is being rebuilt and developed through innovation under the promotion of AMP 2.0. For this reason, the industry has spurred the development of 5G, Artificial Intelligence (AI), and Machine Learning (ML) technologies, which are being applied in smart factories to integrate production process management, product service and distribution, collaboration, and customized production requirements. These smart factories need to solve security problems effectively, with a high detection rate, to ensure smooth operation. However, the number of security incidents occurring in smart factories has been increasing due to botnet Distributed Denial of Service (DDoS) attacks that threaten the security of networks operated on the Internet of Things (IoT) platform. Against botnet attacks, the security network of the smart factory must improve its defensive capability. Among many security solutions, botnet detection using honeypots has been shown to be effective in early studies. A honeypot detects botnet attackers by intentionally creating resources within the network, solving the problem of closely monitoring and acquiring botnet attack behaviour; the traced activity is recorded in a log file. These log files are then classified quickly and with high accuracy with the support of machine learning. Hence, productivity increases while the stability of the smart factory is reinforced. In this study, a botnet detection model is proposed that combines honeypots with machine learning, specifically designed for smart factories. The investigation was carried out in a hardware configuration virtually mimicking a smart factory environment.
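As a rough illustration of the honeypot-plus-ML pipeline the abstract describes, the sketch below trains a classifier on synthetic stand-in features. A random forest stands in for the paper's models, and the dataset is fabricated purely for demonstration.

```python
# Illustrative pipeline: classify (synthetic) honeypot log features as benign/botnet.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))           # stand-in features (counts, durations, ...)
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # stand-in benign(0)/botnet(1) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```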
© The Authors, published by EDP Sciences, 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Chapter 18: Integrating External Data into Excel Reporting
In This Chapter
• Importing data from Microsoft Access databases
• Importing data from SQL Server databases
• Running SQL Server stored procedures from Excel
• Creating dynamic connections with VBA
• Creating a data model with multiple external data tables
Wouldn’t it be wonderful if all the data you come across could be neatly packed into one easy-to-use Excel table? The reality is that sometimes the data you need comes from external data sources. External data is exactly what it sounds like: data that isn’t located in the Excel workbook in which you’re operating. Some examples of external data sources are text files, Access tables, SQL Server tables, and even other Excel workbooks.
This chapter explores some efficient ways to get external data into your Excel data models. Before jumping in, however, your humble authors want to throw out one disclaimer. There are numerous ways to get data into Excel. In fact, between the functionality found in the UI and the VBA/code techniques, there are too many techniques to focus on in one chapter. So for this endeavor, you focus on a handful of techniques that can be implemented in most situations and don’t come with a lot of pitfalls and gotchas.
Importing Data from Microsoft Access
Microsoft Access is used in many organizations to manage a series of tables that interact with each other, such as a Customers table, an Orders table, and an Invoices table. Managing data in Access ... |
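As a side note, the kind of Access pull this chapter performs through the Excel UI and VBA can also be scripted externally. The sketch below uses Python with pyodbc and pandas purely as an alternative illustration, not as the book's technique; the database path and table name are hypothetical.

```python
# Pull an Access table and land it in an Excel workbook (requires the Access
# ODBC driver, plus the pyodbc, pandas, and openpyxl packages).
import pandas as pd
import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\sales.accdb;"  # hypothetical database path
)
with pyodbc.connect(conn_str) as conn:
    cursor = conn.execute("SELECT * FROM Customers")  # hypothetical table
    columns = [col[0] for col in cursor.description]
    customers = pd.DataFrame.from_records(cursor.fetchall(), columns=columns)

customers.to_excel("customers.xlsx", index=False)
```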
A quiet, underground revolution is taking place in the security industry as companies shift from focusing on the perimeter to capturing and analyzing the residue left on endpoint devices by hackers and cyber attacks. Several years ago, a community of forensic researchers began reverse engineering the innards of operating systems. Their efforts led to finding "artifacts," which reveal almost all users and application interaction with the operating system. These breadcrumbs can be found deep within file systems, memory and OS system files. Unlike clearing log files, artifacts are nearly impossible to manipulate.
The residue or artifacts left behind can provide clues about an intruder to IT security professionals. For example, RAT (Remote Access Trojan) residue was important in investigating the cause of the Office of Personnel Management's (OPM) breach. OPM's intrusion prevention system essentially logged data that was being exfiltrated without detecting any of the breadcrumbs that attackers left behind.
Today's incident response and endpoint detection tools use forensic artifacts that have accumulated on endpoints. Advanced rootkits, zero-day attacks and command and control incidents leave an abundance of artifacts. Avoiding leaving a forensic trail is almost impossible.
In this slideshow, Paul Shomo, senior technical manager, Strategic Partnerships, Guidance Software, looks at forensic residue and how it can help organizations better protect themselves from security threats, both inside and outside the organization.
Label manipulation attacks are a subclass of data poisoning attacks in adversarial machine learning used against different applications, such as malware detection. These types of attacks represent a serious threat to detection systems in environments with high noise rates or uncertainty, such as complex networks and the Internet of Things (IoT). Recent work in the literature has suggested using the K-nearest neighbors algorithm to defend against such attacks. However, such an approach can suffer from low accuracy and a high misclassification rate. In this paper, we design an architecture to tackle the Android malware detection problem in IoT systems. We develop an attack mechanism based on the silhouette clustering method, modified for mobile Android platforms. We propose two convolutional neural network-type deep learning algorithms against this Silhouette Clustering-based Label Flipping Attack. We show the effectiveness of these two defense algorithms—label-based semi-supervised defense and clustering-based semi-supervised defense—in correcting labels being attacked. We evaluate the performance of the proposed algorithms by varying the machine learning parameters on three Android datasets (Drebin, Contagio, and Genome) and three types of features: API, intent, and permission. Our evaluation shows that using random forest feature selection and varying ratios of features can result in an improvement of up to 19% in accuracy when compared with the state-of-the-art method in the literature.
- Adversarial example
- Adversarial machine learning (AML)
- Deep learning
- Label flipping attacks
- Malware detection
- Semi-supervised defense (SSD) |
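To make the threat concrete, the sketch below performs the simplest form of label manipulation: randomly flipping a fraction of binary training labels. Note that the paper's attack selects which samples to flip via silhouette clustering; random selection here is a simplified stand-in.

```python
# Simplified label flipping attack: poison a fraction of binary labels.
import numpy as np

def flip_labels(y: np.ndarray, ratio: float, seed: int = 0) -> np.ndarray:
    """Return a copy of binary labels y with `ratio` of them flipped."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(ratio * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # 0 <-> 1 (benign <-> malware)
    return y_poisoned

y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
print(flip_labels(y, ratio=0.25))  # two of the eight labels are flipped
```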
What is Deception Technology?
Deception technology is a strategy to attract cyber criminals away from an enterprise's true assets and divert them to a decoy or trap. The decoy mimics legitimate servers, applications, and data so that the criminal is tricked into believing that they have infiltrated and gained access to the enterprise's most important assets when in reality they have not. The strategy is employed to minimize damage and protect an organization's true assets.
Deception technology is usually not a primary cybersecurity strategy that organizations adopt. The goal of any security posture is protection against all unauthorized access, and deception technology can be a useful technique to have in place once a suspected breach has occurred. Diverting the cyber criminal to fake data and credentials can be key to protecting the enterprise's real assets.
Another benefit of deception technology is research. By analyzing how cyber criminals break the security perimeter and attempt to steal what they believe to be legitimate data, IT security analysts can study their behavior in depth. In fact, some organizations deploy a centralized deception server that records the movements of malicious actors—first as they gain unauthorized access and then as they interact with the decoy. The server logs and monitors any and all vectors used throughout the attack, providing valuable data that can help the IT team strengthen security and prevent similar attacks from happening in the future.
The downside or risk of deception technology is that cyber criminals have escalated the size, scope, and sophistication of their attacks, and a breach may be greater than what the deception server and its associated shadow or mock assets can handle. Further, cyber criminals may be able to quickly determine that they themselves are being tricked as the deception server and decoy assets become immediately obvious to them. As such, they can quickly abort the attack—and likely return even stronger.
To function properly, deception technology must not be obvious to an enterprise's employees, contractors, or customers.
Why is Deception Technology Important?
Deception technology delivers several key benefits and is still considered an important component of a robust cybersecurity strategy.
Decrease Attacker Dwell Time on the Network
The decoy assets must be attractive enough for a cyberattacker to think that they are stealing legitimate assets. However, at some point, the infiltration will stop when IT thwarts the attack from spreading—and attackers figure out that they will be discovered sooner rather than later.
Alternatively, the attacker may quickly realize that the attack is on decoy assets and that the entirety of an organization's assets cannot be stolen. The attacker may quickly leave as a result, realizing the attempt to be a failed one. As such, deception technology decreases the attacker's dwell time on the network.
Expedite the Average Time To Detect and Remediate Threats
Because of the resources involved in deception technology, IT teams typically consider a cyberattack on decoy assets a "special" mission, concentrating their efforts on studying its behaviors and movements. Because of this focus, when unauthorized access is discovered or unusual behaviors are observed on the decoy assets, IT will move quickly. Therefore, deception technology expedites the average time to discover and address threats.
Reduce Alert Fatigue
Too many security alerts can easily overwhelm an IT team. With deception technology in place, the team is notified when cyberattackers breach the perimeter and are about to interact with decoy assets. Additional alerts will help them understand malicious behavior and then track the activities of the attacker.
Preventive Measures with Deception Technology
There are various use cases of deception technology or even an entire deception network. Let us take a look at a few.
Early Post-breach Detection
While no breach is ever welcome, studying the entry point and subsequent behaviors of cyberattackers holds valuable information for IT security analysts. They can analyze attacker activity and glean key data that can be used to reinforce the network and better protect the enterprise from future attacks.
The more convincing the deception technology, including the server and associated applications and data, the longer the mock attack goes on and the more data IT can pull.
Reduced False Positives and Risk
With multiple security point products and systems in place to monitor identity, authorization, and activity, the number and frequency of alerts that IT receives can quickly become overwhelming. Much of it can be noise, and even false positives, causing the IT team to react when they do not need to—and conversely, failing to react when they need to because of too many alert notifications.
Deception technology reduces the incidences of false positives. The first and succeeding alerts to the breach can allow IT to focus on the cyberattacker's movements. Also, risk is mitigated because the attacker interacts with fake applications and assets.
Scale and Automate at Will
Scaling deception technology requires relatively less cost and effort. The decoy server can be used and reused, and it is easy to generate fake data, such as non-existent account numbers and passwords. Any automation tools used for other components of the cybersecurity suite can also be used for deception technology.
From Legacy To IoT
Further to its ability to scale and integrate with existing hardware and software, deception technology can be used with both legacy systems and newer Internet-of-Things (IoT) installations. Cyber criminals often prefer to breach legacy systems, thinking they are easier to infiltrate because the organization has not spent the time updating or reinforcing them.
Are Honeypots Still a Good Deception Technology?
A honeypot is the precursor to today's multi-faceted and more advanced cyber deception. Unfortunately, it no longer represents a good strategy for distracting attackers and protecting an enterprise's true assets.
A classic honeypot is a single asset, such as a large database of fake usernames, passwords, and other credentials. The idea behind honeypots is to have the intruder, after gaining unauthorized access to the network, follow a trail of breadcrumbs from the point of entry to the honeypot. Once the attacker accesses the honeypot, IT is alerted and the honeypot is rendered inactive.
A honeypot is just one security product. As the scale and complexity of cyberattacks increase, a single honeypot may not be enough to lure and engage a cyberattacker. On the other hand, it may be adequate to prompt an attacker to quickly leave. A deception technology strategy protects an enterprise's true assets while diverting attention to false ones, all the while studying the attacker's strategies, tactics, and behaviors to strengthen the enterprise's defenses for next time.
A standalone honeypot may not provide enough of an incentive for today's sophisticated cyberattacker. It may also not provide enough data to help IT security become stronger.
Dynamic Deception and Its Importance
The benefits of deception technology include minimizing damage to a network and the ability to observe and study the real-world tools used by cyber criminals. However, deception technology needs to be sophisticated enough to be convincing—it must create an environment that is indistinguishable from an organization's true environment.
IT teams can lean on machine learning (ML) and artificial intelligence (AI) to adjust the environment dynamically as the assault on the decoy assets occurs. These changes can be similar to the changes IT sees—and what the cyberattacker is also likely to see—in network automation, network access control, or user and entity behavior analytics (UEBA) programs. ML and AI can create these dynamic deception environments that free the IT team from constantly creating specialized, standalone deception campaigns.
Additionally, cybersecurity deception technology can be layered with additional tools that help IT security teams identify cyber criminals. For example, a database of fake credentials can have tracking information embedded in the files. Opening a file can trigger an alert to the organization or to law enforcement officials. Also, sink-hole servers can be used for traffic redirection, tricking bots and malware into reporting to law enforcement rather than to their owner, the cyberattacker.
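As a rough sketch of that embedded-tracking idea, the example below generates a decoy credentials file carrying a unique beacon URL and runs a tiny listener that raises an alert whenever the URL is fetched. The hostname, port, and file contents are entirely hypothetical.

```python
# Honeytoken sketch: a decoy credentials file whose beacon URL triggers an alert.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

token = uuid.uuid4().hex
with open("passwords_backup.csv", "w") as f:  # the decoy file
    f.write("service,username,password,notes\n")
    f.write(f"vpn,svc_backup,Hunter2!,http://canary.example.internal:8080/{token}\n")

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if token in self.path:
            # Someone opened or followed the decoy: log the source address.
            print(f"ALERT: honeytoken {token} triggered by {self.client_address[0]}")
        self.send_response(204)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), BeaconHandler).serve_forever()
```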
How Fortinet Can Help
FortiDeceptor is the Fortinet solution that enables organizations to create a fabricated deception network. FortiDeceptor provides automatic deployment of decoy assets, enticing attackers to engage long enough for IT to capture vital data before thwarting the attack. FortiDeceptor integrates with an enterprise's existing infrastructure, removing the need to purchase and provision special endpoints and servers to create the fabricated environment. |
One of the biggest risks with software security is the opaque nature of verification tools and processes, and the potential for false negatives not covered by a particular verification technique (e.g. automated dynamic testing).
Despite many best practices around a secure Software Development Lifecycle (SDLC), most organizations tend to rely primarily on testing to build secure software. One of the most significant byproducts of current methods of testing is that organizations rarely understand what is being tested – and more importantly – what is NOT being tested by their solution. Our research suggests that any single automated assurance mechanism can verify a maximum of 44% of security requirements. The NIST Static Analysis Tool Exposition found that all static analysis tools combined reported warnings on 4 out of 26 known vulnerabilities in Tomcat. Because the practice of relying on opaque verification processes is so pervasive, it has become the industry standard, and consequently many organizations are content with testing as the primary means to secure software.
Suppose, for example, you hire a consultancy to perform a penetration test on your software. Many people call this testing “black box” based on the QA technique of the same name, where testers do not have detailed knowledge of the system internals (e.g. system code). After executing the test, the firm produces a report outlining several vulnerabilities with your application. You remediate the vulnerabilities, submit the application for re-testing, and the next report comes back “clean” – i.e. without any vulnerabilities. At best, this simply tells you that your application can’t be broken into by the same testers in the same time frame. On the other hand, it doesn’t tell you:
- What are the potential threats to your application?
- Which threats is your application “not vulnerable” to?
- Which threats did the testers not assess your application for? Which threats were not possible to test from a runtime perspective?
- How did time and other constraints on the test affect the reliability of results? For example, if the testers had 5 more days, what other security tests would they have executed?
- What was the skill level of the testers and would you get the same set of results from a different tester or another consultancy?
In our experience, organizations aren’t able to answer most of these questions. The black box is double-sided: the tester doesn’t understand application internals and the organization requesting the test doesn’t know much about the security posture of their software. We’re not the only ones who acknowledge this issue: Haroon Meer discussed the challenges of penetration testing at 44con. Most of these issues apply to every form of verification: automated dynamic testing, automated static testing, manual penetration testing, and manual code review. In fact a recent paper describes similar challenges in source code review.
Examples of Requirements
To better illustrate this issue, let’s take a look at some common high-risk software security requirements and examine how common verification methods apply to them.
Requirement: Hash user passwords using a secure hashing algorithm (e.g. SHA-2) and a unique salt value. Iterate the algorithm multiple times.
How common verification methods apply:
- Automated run-time testing: Unlikely to have access to stored passwords, therefore unable to verify this requirement
- Manual run-time testing: Only able to verify this requirement if another exploit results in a dump of stored passwords. This is unreliable, therefore you cannot count on run-time testing to verify the requirement
- Automated static analysis: Only able to verify this requirement under the following conditions:
- The tool understands how authentication works (i.e. uses a standard component, such as Java Realms)
- The tool understands which specific hashing algorithm the application uses
- The tool understands if the application uses unique salt values for each hash
In practice, there are so many ways to implement authentication that it is unrealistic to expect a static analysis tool to be able to verify this requirement across the board. A more realistic scenario is for the tool to simply recognize authentication and point out that secure hashing and salting are necessary. Another scenario is for you to create custom rules to identify the algorithm and hash value and verify they meet your own policy, although in our experience this practice is rare.
- Manual code review: The most reliable common verification method for this requirement. Manual assessors can understand where authentication happens in the code, and verify that hashing and salting meets best practices.
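For reference, here is a minimal sketch of the requirement itself, assuming PBKDF2 with a SHA-256 core and an illustrative iteration count satisfy your policy: a unique random salt per user and an iterated hash, compared in constant time.

```python
# PBKDF2 password hashing sketch: unique salt, many iterations.
import hashlib
import hmac
import os

ITERATIONS = 310_000  # illustrative; set according to your own policy

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, key) for storage; the salt is unique per user."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
```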
Requirement: Bind variables in SQL statements to prevent SQL injection
SQL Injection is one of the most devastating application vulnerabilities. A recent flaw in Ruby on Rails allowed SQL Injection for applications built on its stack.
How common verification methods apply:
- Automated run-time testing: While run-time testing may be able to find the presence of SQL injection by analyzing behavior, it cannot verify the absence of it. Therefore, automated testing run-time testing cannot verify this requirement completely
- Manual run-time testing: Same limitations as automated run-time testing
- Automated static analysis: Generally able to verify this requirement, particularly if you are using a standard library to access a SQL database. The tool should be able to understand if you are dynamically concatenating SQL statements with user input, or using proper variable binding. There is a chance, however, that static analysis may miss SQL injection vulnerabilities in the following scenarios:
- You use stored procedures on the database and are unable to scan the database code. In some circumstances, stored procedures can be susceptible to SQL injection
- You use an Object Relational Mapping (ORM) library which your static analysis tool does not support. ORMs can also be susceptible to injection.
- You use non-standard drivers / libraries for database connectivity, and the drivers do not properly implement common security controls such as prepared statements
- Manual code review: Like static analysis, manual code review can confirm the absence of SQL injection vulnerabilities. In practice, however, production applications may have hundreds or thousands of SQL statements. Manually reviewing each one can be very time consuming and error prone.
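For reference, a minimal sketch of the bound-variable requirement using Python's sqlite3 driver; the placeholder syntax varies by driver, but the principle is the same.

```python
# Parameter binding neutralizes a classic injection payload.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern (never do this): string concatenation lets the
# payload rewrite the query:
#   conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Safe: the driver binds the value, so the payload stays a literal string.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- no match, injection neutralized
```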
Requirement: Apply authorization checks to ensure users cannot view another user’s data.
How common verification methods apply:
- Automated run-time testing: By accessing data from two different users and then attempting to access one user’s data from another user’s account, automated tools can perform some level of testing on this requirement. However, these tools are unlikely to know which data in a user’s account is sensitive or if changing the parameter “data=account1” to “data=account2” represents a breach of authorization.
- Manual run-time testing: Manual run-time tests are generally the most effective method of catching this vulnerability because human beings can have the domain knowledge required to spot this attack. There are some instances, however, where a runtime tester may not have all of the information necessary to find a vulnerability: for example, when appending a hidden parameter such as “admin=true” allows access to another user’s data without an authorization check.
- Automated static analysis: Without rule customization, automated tools are generally ineffective in finding this kind of vulnerability because it requires domain understanding. For example, a static analysis tool is unable to know that the “data” parameter represents confidential information and requires an authorization check.
- Manual code review: Manual code review can reveal instances of missing authorization that can be difficult to find with run-time testing, such as the impact of adding an “admin=true” parameter. However, actually verifying the presence of authorization checks with manual code review can be laborious. An authorization check can appear in many different parts of the code, so a manual reviewer may need to trace through several different execution paths to detect the presence or absence of authorization.
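The kind of object-level check being discussed looks roughly like the framework-neutral sketch below; all names are illustrative.

```python
# Object-level authorization: verify the requester owns the object.
from dataclasses import dataclass

@dataclass
class Account:
    id: int
    owner_id: int
    balance: float

ACCOUNTS = {1: Account(1, 101, 250.0), 2: Account(2, 202, 90.0)}

def get_account(requesting_user_id: int, account_id: int) -> Account:
    account = ACCOUNTS.get(account_id)
    if account is None:
        raise KeyError("no such account")
    # The check a scanner cannot infer on its own: ownership.
    if account.owner_id != requesting_user_id:
        raise PermissionError("user may not view another user's data")
    return account

print(get_account(101, 1))  # allowed
# get_account(101, 2)       # raises PermissionError
```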
Impact to you
The opaque nature of verification means effective management of software security requirements is essential. With requirements listed, testers can specify both whether they have assessed a particular requirement and the techniques they used to do so. Critics argue that penetration testers shouldn’t follow a “checklist approach to auditing” because no checklist can cover the breadth of obscure and domain-specific vulnerabilities. Yet the flexibility to find unique issues does not obviate the need to verify well understood requirements. The situation is very similar for standard software Quality Assurance (QA): good QA testers both verify functional requirements AND think outside the box about creative ways to break functionality. Simply testing blindly and reporting defects without verifying functional requirements would dramatically reduce the utility of quality assurance. Why accept a lower standard from security testing?
Before you perform your next security verification activity, make sure you have software security requirements to measure against and that you define which requirements are in scope for the verification. If you engage manual penetration testers or source code reviewers, it should be relatively simple for them to specify which requirements they tested for. If you use an automated tool or service, work with your vendor to find out which requirements their tool or service cannot reliably test for. Your tester/product/service is unlikely to guarantee an absence of false negatives (i.e., certify that your application is not vulnerable to SQL injection), but knowing what they did and did not test for can dramatically increase your confidence that your system does not contain known, preventable security flaws.
January 24, 2015
As many of my friends are Hypo Tirol banking customers and use the mobile banking app – and my wife is on a business trip and it's dark outside – I took a short look at the mobile banking app for Android. And "oh my God", the same mistakes banks made 10 years ago with online banking are being made again.

I downloaded the app and launched it … and got the following.

So what does Wireshark tell me after I started the app?

Yes, there is some HTTP (in fact, most of it) … so let's open the URL on my PC.
So the whole starting GUI of the banking app is transferred from the server via HTTP.

An attacker can use this to change the content to his liking, and as the URL is not shown in the app, it could be anything. One idea would be a site that looks like the banking site. The link "Mobile Banking" goes to the HTTPS URL.

The attacker can just copy and paste the pages and change the links, so it looks identical to the user ;-). So only one question remains – how can an attacker change the content:
- The DNS servers return the IP address of the attacker for mobile.hypotirol.com
- There are many known worms that change the DNS server settings of consumer internet routers
- DNS poisoning attacks … seen in the wild for banking attacks
- A man-in-the-middle attack on a public WiFi – but the first two are much easier and can be exploited remotely.
Use HTTPS everywhere – no HTTP. And check the certificates.
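In client code that boils down to something like the following sketch, which fetches the start page over HTTPS with certificate verification left on (the requests library verifies by default); the URL is taken from the article and the snippet is illustrative only.

```python
# Fetch over HTTPS with certificate verification enabled (requests' default).
import requests

resp = requests.get("https://mobile.hypotirol.com/", timeout=10)  # verify=True by default
resp.raise_for_status()
print(resp.status_code)
```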
January 17, 2015
This is the first post in over a month – why? As always, I was at the Chaos Communication Congress in Hamburg, and when I came back there was finally snow –> so I went ski mountaineering. Anyway, here is a new post; it's raining today, so let's write.

This post is about the lack of security awareness at the major Tyrolean newspaper Tiroler Tageszeitung (TT for short). So let's start with why I believe that is true – or, to be more accurate, with what I found within 5 minutes of looking (it took much longer to write this post).
The subscriber area
When you access http://user.tt.com/ you get the following login prompt.

But look above …

Yes, this site is not HTTPS protected. This is generally not a good idea, as an attacker is able to change the URL the passwords are sent to after pressing the login button. But OK, in 2011 that was bad – bad, but not that bad. Why I talk about 2011 I'll tell you later.

So let's enter our mail address and password and click the login button. What request is sent?
- It is HTTP and not HTTPS? Using HTTP for login in 2014? That was bad even in 2011.
- They are using HTTP GET with the password as a parameter. I can't believe it. Why is this bad? GET parameters are logged on web servers and, even worse, on proxy servers. Never, never submit passwords with GET – use POST and use HTTPS! (See the sketch below.)
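The corrected login flow looks roughly like this sketch: credentials travel in a POST body over HTTPS, never in the query string. The endpoint path and field names are hypothetical.

```python
# Credentials in a POST body over HTTPS, not in GET parameters.
import requests

resp = requests.post(
    "https://user.tt.com/login",  # hypothetical path on the real host
    data={"email": "reader@example.com", "password": "s3cret"},  # POST body
    timeout=10,
)  # certificate verification is on by default
print(resp.status_code)
```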
So reading the online TT while waiting for something in a public WiFi network (which is most likely unencrypted) is not a good idea. How many TT users are reusing their password (the email address is a given)? How many users are potentially affected?

At least I'm able to answer the second question. There is the Österreichische Auflagenkontrolle (ÖAK) … which counts how many copies of a given print medium are sold.

That's from 2012; the figures from 2013 are slightly smaller but not formatted nicely enough to show in a screenshot here. So: over 80,000 affected users. The state of Tirol has about 720,038 citizens according to Wikipedia, so over 10% of the population is affected.
The server side
While looking at the GET request I found something else interesting: at least the user.tt.com server seems to be running Debian Lenny.
Why is that important? Let’s go to the Debian Wiki and have a look.
Basically we could own the user.tt.com server easily, but what about the other servers? Are they better? What is obvious from the start is that the servers for the main site are different ones: they are using Varnish, which is an HTTP accelerator, and they have learned to hide the Apache version in the HTTP header.

A short look at the Whois shows that user.tt.com seems to be hosted by the TT itself, and the frontend server for www.tt.com by the APA guys. It seems that they are filtering the bad stuff from the backend TT servers. As I didn't want to dig deeper than what was possible in 5 minutes, I stopped here … Just one thing I found which is not security related: tt.com makes heavy use of Google services, for example Google Analytics.

The option _anonymizeIp() is missing here, which is needed to avoid violating the Austrian data protection law; you also need to post a notice for your visitors (I could not find one on tt.com) and make an opt-out possible.

So much for my 5-minute analysis of the Tiroler Tageszeitung homepage.
July 18, 2014
Many people have received new debit cards (called "Bankomatkarte" in Austria) from the various banking institutes in the last months and years. Many cards are PayPass enabled for contactless money transactions. PayPass is based on NFC, which is also integrated into some modern smart phones. The default setting is that five 25-euro transactions can be done without entering a PIN, so the possible damage can be up to 125 euro. You can verify whether your debit card supports the standard by checking whether it has PayPass printed on it.
Picture: Maestro PayPass
But I’ve seen some cards with only this symbol (at least on the front side):
Anyway, in theory the card needs to be within 10 cm of the reader, and therefore an attack is not that easy. But already at Defcon 20 in 2012, Eddie Lee presented the possibility of NFCProxy, a tool which allows misuse of a card. The attack setup looks like this:
Picture: Eddie Lee @ Defcon 20
This allows the following attack vector: you're standing in a crowd or in a line and have your debit card in your back pocket. One of the attackers stands behind you … and the other can be, e.g., a hundred meters away (limited only by the delay and reach of the network connection). They will be able to get your money with much less risk than with pickpocketing. And to make it even better – you can download the app for Android as an .apk file, ready to install and use, from Sourceforge.

So now you know about the problem – what can you do to mitigate it?
- If you don't need the feature at all, try talking to your bank to disable it. Some will do it for free, others will charge you. Some banks let you choose whether you want a card with or without it when your card is renewed.
- You basically like the feature, but would like more control over it – that's also possible:
- Search for RFID/NFC blocking sleeves for credit or payment cards
- You can also get wallets with an RFID/NFC blocking feature … but currently they don't look that great … at least the ones I found
February 16, 2014
Originally I only wanted to look at the traffic to check why it took so long on my mobile, but then I found some bad security implementations.

1. The web service is password protected, but the password, which is the same for all copies of the app, is sent in the clear

Just look at the request, which is sent via HTTP (not HTTPS) to the server. Take the string and do a base64 decoding and you get client:xxxxxx – that's the username and password, and it's the same for every copy of the app.
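Reproducing that observation takes a single decode call, since HTTP Basic authentication is just base64. The header value below is illustrative, not the app's real credentials.

```python
# Decode an HTTP Basic auth header back to username:password.
import base64

header = "Basic Y2xpZW50Onh4eHh4eA=="      # illustrative captured header
encoded = header.split(" ", 1)[1]
print(base64.b64decode(encoded).decode())  # -> client:xxxxxx
```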
2. We collect private data and don’t tell our users for what
The app asks the following question at every launch until you say yes: "Um in den vollen Genuss der Vorzüge dieser App zu kommen, können Sie sich bei uns registrieren. Wollen Sie das jetzt tun? / To get the full benefit of this app, you can register with us. Do you want to do that now?"

But for what feature do you need to register? What happens with the data you provide? There is nothing about it in the legal notice of the app. I'm also missing the DVR number from the Austrian Data Protection Authority; a quick search in the database also didn't show anything. Is it possible they forgot it?
3. We don’t care about private data which is given to us
The private data you're asked for at every launch (until you provide it) is sent in the clear over the Internet. Was an SSL certificate too expensive?
4. We are generating incremented client IDs to make it easy to guess the IDs of other users
At the first launch of the app on a mobile, the app requests a unique ID from the server – which is not random and is easily guessable. No, it's just an incremented integer (could it be the primary key of the database table?); at least that's what my tests showed … the value only got bigger, and not that much bigger, every time.

And as the image at point 3 shows, all someone needs in order to change the user data on the server for another user is this number; a small script counting from 1 up to 20,000 would be something nice … The question is what else you can do with this ID. Should I dig deeper?
5. We’re using an old version of Apache Tomcat
The web service tells everyone who wants to know that it's running on Apache Tomcat/6.0.35. There are 7.0 and 8.0 releases out already, but the current patch release of 6.0 is 6.0.39, released 31 January 2014. It's worse than that: 6.0.35 was released on 5 Dec 2011 and replaced on 19 Oct 2012 with 6.0.36. Someone not patching for over 2 years? No, can't be – the app is not that old. So an old version was installed in the first place?

PS: If you're working with the Ubuntu 12.04 LTS package … Tomcat is in universe, not main … no official security patches.

These are my results after looking at the app for a short period of time … I needed to do other stuff in between.
For some time now a mobile app for Android phones and iPhones has been advertised as the official app of Tirol's Avalanche Warning Service and Tiroler Tageszeitung (Tirol Daily Newspaper), so I installed it on my Android phone some days ago. Yesterday I went on a ski tour (ski mountaineering), and on the way there in the car I tried to update the daily avalanche report, but it took really long and failed in the end. I thought that can't possibly be, as the homepage of Tirol's Avalanche Warning Service worked without any problems and was fast.
So when I was home again I took a closer look at the traffic the app sends to and receives from the Internet, as I wanted to know why it was so slow. I installed the app on my test mobile and traced the traffic it produced on my router while it launched for the first time. I was a little bit shocked when I looked at the size of the trace: it was 18 Mbyte. Ok, this makes it quite clear why it took so long on my mobile, so part of this post series will be about getting the size of the communication down. I opened the trace in Wireshark and took a look at it. First I checked where the traffic was coming from.
So my focus was on 188.8.131.52, which was the IP address of tirol.lawine-app.com; it is hosted by a German provider called Hetzner (you can rent "cheap" servers there). As I opened the TCP stream I saw a misconfiguration at once: the client supports gzip, but the server does not send gzipped responses.
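You can verify this kind of misconfiguration without Wireshark. A quick sketch with Python's requests library, which advertises gzip support by default (result as observed at the time of writing):

import requests

# requests sends "Accept-Encoding: gzip, deflate" by default; if the server
# honored it, the response would carry a Content-Encoding header.
r = requests.get("http://tirol.lawine-app.com/")
print(r.headers.get("Content-Encoding"))  # None here -> served uncompressed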
Just to get a value for how much it would save without any other tuning, I gzipped the trace file and got from 18.5 Mbyte down to 16.8 Mbyte – 10% saved. Then I extracted all downloaded files: jpg files with 11 Mbyte and png files with 4.3 Mbyte … so it seems that saving there will help the most. Looking at the biggest pictures led to the realization that the jpg images were saved with the lowest compression setting (a recompression sketch follows the list below), e.g. 2014-02-10_0730_schneeabs.jpg
- 206462 Bytes: orginal image
- 194822 Bytes: gimp with 90% quality (10% saving)
- 116875 Bytes: gimp with 70% quality (40% saving)
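The numbers above came from GIMP; the same recompression can be scripted, for example with Python and Pillow (assuming Pillow is installed):

from PIL import Image

# Re-save the largest map with a saner JPEG quality setting;
# quality=70 reproduced roughly the 40% saving measured above.
img = Image.open("2014-02-10_0730_schneeabs.jpg")
img.save("2014-02-10_0730_schneeabs_q70.jpg", "JPEG", quality=70)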
Some questions also arose:
- Some information, like the legend, is always the same … why not download it only once and reuse it until the legend gets updated?
- Some big parts of the pictures are only text; why not send the text and let the app render it?
- Why are the jpeg files 771x566 and the png files 410x238 when they show the same map of Tirol? Downsizing would save 60% of the size (at the same compression level).
- Why are some maps done in PNG anyway? e.g. 2014-02-10_0730_regionallevel_colour_pm.png has 134103 bytes; saving it as jpeg in gimp with 90% quality leads to 75015 bytes (45% saving).
So I tried to calculate the savings without reducing the information that is transferred, just its representation, and it comes to over 60% … so instead of 18 Mbyte we would only need to transfer 7 Mbyte. If the default setting were changed to 3 days instead of 7, it would go down even further, as I guess most people look only at the last 3 days, if even that. So it could come down to 3-4 Mbyte … that would be Ok, so please optimize your software!
I only wanted to make one post about this app, but while looking at the traffic I found some security and privacy concerns I need to look into a bit closer … so expect a part 2.
January 19, 2014
There seems to be a virus wave here in Austria and Germany. I don't really know why, but somehow many people click on the links and download the malware. Maybe it's because the mail is a faked invoice from some well-known (mobile) telecommunication providers and is written in good German – normally spam like this is written in broken German. And it seems that the mail passed anti-spam systems, as I got some of the mails on the corporate account and at home … normally I don't get spam mails for months.
Anyway, while I was driving home today it was even in the local radio news … one of the top items there. And when I was home, a relative who doesn't live that close by called me and asked me how to get rid of that virus. He got infected because his antivirus initially didn't detect it. I recommended him the following link from Raymond. It's a comprehensive list of 26 bootable antivirus rescue CDs for offline scanning. I recommended he use at least two of the following from the list.
- Bitdefender Rescue CD
- Kaspersky Rescue Disk
- F-Secure Rescue CD
- Windows Defender Offline
So if your relatives ask you the same, you don't need to search any further.
November 5, 2013
Yesterday I wrote about the information leak in the Railjet WiFi. Today I'm traveling back to Tirol again on a Railjet, and I found something else that is disturbing. I believe it's even more problematic, as it concerns the mail system. I used an openssl client to check various SSL and TLS connections to my servers, and when I ran the following:
$ openssl s_client -connect smtp.xxx.at:25 -starttls smtp
I got something I didn’t expect:
didn't found starttls in server response, try anyway...
Hey, my server does not support STARTTLS? I'm sure it does. I SSHed to a server of mine, typed the same command and got my server certificate complete with chain. So something is not right here. I switched to Wireshark (which is running all the time … Ok, I launched it) and looked at the traffic:
server: 220 profinet.at SurgeSMTP (Version 6.3c2-2) http://surgemail.com
client: EHLO openssl.client.net
server: 250-profinet.at. Hello openssl.client.net (184.108.40.206)
server: 250-AUTH LOGIN PLAIN
server: 250-X-ID 5043455352563431333833323030373135
server: 250-SIZE 50000000
server: 250 HELP
server: 500 Sorry SSL/TLS not allowed from (220.127.116.11)
Hey? That's not my mail server. It's not my IP address, and it's surely not the mail server software I use. WTF?
Someone is intercepting my SMTP traffic, and if my mail client used the default setting (use TLS if possible) I would now be sending my login data (which for most people is the same as for fetching mail) in the clear over an unprotected WiFi. Block port 25 if you're afraid of spammers, but don't force unencrypted traffic over an open WiFi.
Anyway, what's that profinet.at stuff … can't be profi as in professionals. The whois shows the following:
Organisationsname: OeBB Telekom Service GmbH
Strasse: Bruenner Strasse 20
Ok, that's the OeBB itself. Real experts.
So keep an eye on your SMTP/IMAP configuration and make sure you're forcing TLS/SSL, otherwise someone on the same train will see your data.
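If you want to script such a check, here is a minimal Python sketch (the host name is a placeholder). It refuses to continue when STARTTLS is missing or the certificate does not validate:

import smtplib
import ssl

ctx = ssl.create_default_context()
with smtplib.SMTP("smtp.example.at", 25, timeout=10) as smtp:
    smtp.ehlo()
    if not smtp.has_extn("starttls"):
        raise RuntimeError("No STARTTLS advertised - possible interception")
    smtp.starttls(context=ctx)  # raises ssl.SSLError on a bad certificate
    smtp.ehlo()
    print("STARTTLS negotiated, certificate validated")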
November 4, 2013
Today I traveled with the OEBB Railjet, which provides free WiFi. As the journey took some hours, I had time to look at my network traces, and I found something. After the captive portal with the Terms of Service is acknowledged, a page with some info is shown. One piece of info is the original URL the user requested; if the user clicks on the link, a separate tab opens with the page. The problem is that the URL the browser was given to access this info page has the following format:
This URL is sent as the referrer to the originally requested page if you click on the link. As you can see, the referrer contains the full MAC address of the requesting device. Normally the MAC address is only visible on Layer 2, but with this information leak, in my case www.orf.at knows my MAC address, and if I have already gotten a cookie, they could now add my MAC to the list of known IDs. Ok, I guess the ORF doesn't do that, but others might.
A solution would be simple for the OEBB, but until then don't click on this link – type the URL again.
November 10, 2012
In my last blog post I showed how to connect to a PPPoA provider with a Mikrotik router and get the public IP address on the router. I also mentioned that my provider has the bad habit of disconnecting every 8h. As that's not exactly 8h, the disconnect time tends to wander, but I want it to happen at the same times every day. This blog post shows you how to do that if you want the same.
What the script basically does is force a reconnect at a given time once a day. First we need to make sure that we have the correct time on the router. The simplest way to do that is the following line:
/system ntp client set enabled=yes mode=unicast primary-ntp=18.104.22.168
But you can only use an IP address there; if you want DNS names, take a look at this script. Also verify that you've configured the correct time zone with this command:
/system clock set time-zone-name=Europe/Vienna
Verify the current time with
[admin@MikroTik] > /system clock print
Now we need to write the script, which we do in 2 steps. First we create the script …
/system script add name=scriptForcedDslReconnect source=""
… then we open it in the editor and add the actual code
[admin@MikroTik] > /system script edit 0
After this you get an editor; just copy and paste the following lines:
/interface pptp-client set [find name="pptpDslInternet"] disabled=yes
/interface pptp-client set [find name="pptpDslInternet"] disabled=no
/log info message="pptpDslInternet forced reconnect. Done!"
Save and exit with CTRL-O. You can now check that everything is correct with (everything should be colored in the script):
/system script print
Now we only need to add it to the scheduler
/system scheduler add name=schedularForcedDslReconnect start-time=00:40:00 interval=24h on-event=scriptForcedDslReconnect
And we're done: it will now always disconnect at 00:40, 8:40 and 16:40, as we wanted.
November 4, 2012
I live in Austria, where the biggest Internet provider is A1 Telekom Austria, and they use PPPoA and not PPPoE. I searched throughout the Internet for documentation on how to configure a Mikrotik router for this, as I wanted to have the public IP address on the Mikrotik and not on the provider router/modem, but I did not find any. Now that I've got it working, I'll provide that documentation.
1. The Basics
PPPoA is the abbreviation for PPP over ATM (or, some say, PPP over AAL5), and it is used to encapsulate PPP into ATM cells to get onto the Internet via ADSL connections. The more commonly used standard in this space is PPPoE (PPP over Ethernet), which has somewhat more overhead, as you also need to encapsulate the Ethernet header.
There are now two possibilities:
The first is that the provider modem/router handles everything and you only get a private IP address behind the router, with the router masquerading the private IP addresses. This is normally the default, as it works for 95% of the customers, but your PC or own router does not get a public IP address. You need to use port forwarding if you want to provide services that are reachable from the Internet. And, something I specifically need: you don't get an event when you are disconnected and assigned a new IP address. A1 Telekom Austria has the bad habit of disconnecting you every 8 hours … 3 times a day. As I want to have the disconnects always at the same times, I need my own router to time it once a day, so it gets reset to my desired reconnect times.
The second way is to somehow get the public IP address onto the PC or router. In this case you need a provider modem/router with a PPPoA-to-PPTP relay. Take a look at the picture I took from the German Wikipedia (CC-BY-SA-3.0, author Sonos):
The computer (or Mikrotik router) thinks it establishes a PPTP tunnel with the modem, but instead the modem encapsulates the packets and sends them on via ATM to the provider backbone. So the computer or Mikrotik router does not need to be able to talk PPPoA; it is enough if it is able to talk PPTP – the rest is handled by the modem.
But of course there are some requirements:
- The provider modem needs to be able to act as a PPPoA-to-PPTP relay and, importantly, you need to be able to configure it, as some provider firmwares restrict that.
- You need to know the username and password used for the PPP authentication.
- And, for the sake of completeness: you need a Mikrotik router.
3. Provider modem / router
My provider gave me a Thomson Speedtouch TG585 v7 modem/router. The firmware is old (22.214.171.124) and branded, but I was able to upload a new configuration via the web interface.
And as it works stably, I did not see a reason to upgrade. I found an INI file on the Internet which configured the router to PPPoA-to-PPTP relay mode. Three important notes:
- If you search the Internet for a configuration file, look for "single user" or "single user mode" (SU); the masquerade mode is called "multi user mode" (MU).
- It is also possible to configure the single user mode via telnet; there are some howtos out there. The specific ones for Austria are of course in German.
- The version numbering is quite broken. The A1 Telekom Austria branded firmwares often have higher numbers (e.g. 126.96.36.199) than the newer generic firmwares (e.g. 188.8.131.52_AA).
After configuring the router as a PPPoA-to-PPTP relay, it has the IP address 10.0.0.138/24 in my setup.
4. Mikrotik PPP configuration
So now to the Mikrotik configuration … we start by resetting the configuration with no defaults.
/system reset-configuration no-defaults=yes
Then we rename the first interface and add a transit network IP address
/interface ethernet set 0 name=ether1vlanTransitModem
/ip address add address=10.0.0.1/24 interface=ether1vlanTransitModem
And now we only need to configure the PPTP
/ppp profile add change-tcp-mss=yes name=pppProfileDslInternet use-compression=no use-encryption=no use-vj-compression=no
/interface pptp-client add add-default-route=yes connect-to=10.0.0.138 disabled=no name=pptpDslInternet password=YourPassword profile=pppProfileDslInternet user=YourUsername
After connecting ether1 to the modem, this configuration should lead to the following log entries:
[admin@MikroTik] > /log/print
00:29:03 pptp,ppp,info pptpDslInternet: initializing...
00:29:03 pptp,ppp,info pptpDslInternet: dialing...
00:29:05 pptp,ppp,info pptpDslInternet: authenticated
00:29:05 pptp,ppp,info pptpDslInternet: connected
You should see the IP address too:
[admin@MikroTik] > /ip route print
Flags: X - disabled, A - active, D - dynamic, C - connect, S - static, r - rip, b - bgp, o - ospf, m - mme, B - blackhole, U - unreachable, P - prohibit
# DST-ADDRESS PREF-SRC GATEWAY DISTANCE
0 ADS 0.0.0.0/0 xxx.xxx.xxx.xxx 1
1 ADC 10.0.0.0/24 10.0.0.1 ether1vlanTrans... 0
2 ADC xxx.xxx.xxx.xxx/32 yyy.yyy.yyy.yyy pptpDslInternet 0
But if you try to ping something you’ll get
[admin@MikroTik] > ping 184.108.40.206
HOST SIZE TTL TIME STATUS
sent=2 received=0 packet-loss=100%
What's the problem? The router uses the wrong source IP address. Try the following (the xxx.xxx.xxx.xxx is the IP address from
/ip route print, entry 2):
[admin@MikroTik] > /ping src-address=xxx.xxx.xxx.xxx 220.127.116.11
HOST SIZE TTL TIME STATUS
18.104.22.168 56 46 37ms
22.214.171.124 56 46 36ms
126.96.36.199 56 46 37ms
188.8.131.52 56 46 37ms
184.108.40.206 56 46 37ms
220.127.116.11 56 46 37ms
sent=6 received=6 packet-loss=0% min-rtt=36ms avg-rtt=36ms max-rtt=37ms
Now the Internet connection is working; we just need to make it usable …
5. Mikrotik on the way to be usable
The first thing we need is a masquerade rule so that we use the correct IP address towards the Internet; the following does the trick.
/ip firewall nat add action=masquerade chain=srcnat out-interface=pptpDslInternet
But we also want a client to test it … so here is the configuration I use for the clients (without explanation, as it is not the topic of this howto):
/interface ethernet set 2 name=ether3vlanClients
/ip address add address=10.23.23.1/24 interface=ether3vlanClients
/ip dns set allow-remote-requests=yes servers=18.104.22.168,22.214.171.124
/ip dns static add address=10.23.23.1 name=router.int
/ip pool add name=poolClients ranges=10.23.23.20-10.23.23.250
/ip dhcp-server add address-pool=poolClients authoritative=yes disabled=no interface=ether3vlanClients name=dhcpClients
/ip dhcp-server network add address=10.23.23.0/24 dns-server=10.23.23.1 domain=int gateway=10.23.23.1
Connect a client behind it, set it to DHCP and everything should work. I hope this howto demystifies PPPoA and Mikrotik.
- Firewall Id – ID of the corresponding entry in the database tab. Events are assigned their IDs in the order they take place, e.g.: Event 1, Event 2, etc.
- Client Name – name identifying a client in ERA. New clients use the value Computer Name. Client Name can be modified with no side effects.
- Computer Name – name of the computer where the event took place
- MAC Address – MAC address of the client workstation reporting the event
- Primary Server – name of the ERA Server to which the given workstation is connected. If not supplied during the installation of ERA Server, it equals the name of the computer where it is running.
- Date Received – exact date and time of reception of the event by the ERA Server
- Date Occurred – exact date and time when the event took place on the workstation
- Level – emergency level of the event
- Event – action taken for the event
- Source – IP address initiating the event
- Target – target IP address of the event
- Protocol – type of protocol used in the event
- Rule – rule created
- Application – application concerned
- User – name of the user logged in when the incident occurred
Click Copy to Clipboard to copy the above information to the clipboard.
AWS Network Firewall Versus Azure Firewall: An Overview and Key Features
Cyber Security, Cloud - February 25, 2022
With cyberattacks becoming more prevalent on a daily basis, it is critical to safeguard your applications and networks, on-premises or in the cloud, with a security device that protects against attacks originating from outside and trying to breach the perimeter. While offering extensive access control, network firewalls can defend your network and applications against dangers such as malware, botnets, and DDoS assaults.
There are two methods for incorporating an advanced firewall into your network: the use of a physical security device or the use of a software-based firewall. In the classic enterprise model, network traffic is routed through a physical cybersecurity device; this is changing with cloud services and application hosting.
Software-based firewalls are gaining popularity due to several advantages, such as versatility, cost, and ease of deployment, configuration and maintenance. Additionally, they are quicker to learn. The enterprise cloud firewall market is dominated by two big competitors: the Azure and AWS firewalls.
Let us examine their distinguishing characteristics.
Azure Network Firewall
Azure Firewall is a cloud-based, managed security service that secures the resources in your
Azure Virtual Network. It comes with high availability and unconstrained cloud scalability built
in. You may create, enforce, and log policies for apps and network connections across
subscriptions and virtual networks centrally. Azure Firewall assigns your virtual network
components a static public IP address, which enables external firewalls to detect traffic coming
from your virtual network. For monitoring and analysis, the service is completely integrated
with Azure Monitor.
Azure Firewall includes the following capabilities:
Scalability: Azure Firewall can scale up to meet changing network traffic flows, so you
don't have to account for peak traffic.
Filtering criteria for application FQDNs: You can specify a list of fully qualified domain names (FQDNs) for outbound HTTP/S traffic, including wildcards (a conceptual matching sketch follows the capability list below). This functionality is self-contained and does not need SSL termination.
Filtering rules for network traffic: Network filtering rules that allow or deny traffic may be created centrally by source and destination IP address, port, and protocol. Azure Firewall is fully stateful, which enables it to distinguish legitimate packets for various sorts of connections. Rules are enforced and logged across numerous subscriptions and virtual networks.
FQDN tags: FQDN tags make it simple to allow traffic from well-known Azure service networks through your firewall. For instance, suppose you wish to allow Windows Update network traffic through your firewall. You add the Windows Update tag to an application rule, and Windows Update network traffic can now flow through your firewall.
Support for outbound SNAT: The IP addresses of all outgoing virtual network traffic are translated to the public IP address of the Azure Firewall (Source Network Address Translation). You can identify and permit traffic that originates in your virtual network and flows to and from remote Internet destinations.
DNAT support: Inbound network traffic to your firewall's public IP address is translated using DNAT (Destination Network Address Translation) and filtered to the private IP addresses on your virtual networks.
Logging in Azure Monitor: All events are integrated with Azure Monitor, which enables you to archive logs to a storage account, stream them to an Event Hub, or send them to Log Analytics.
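As promised above, here is a conceptual Python sketch of wildcard FQDN matching. It shows the rule semantics only; the allow-list entries are hypothetical, and this is not Azure Firewall's actual implementation.

import fnmatch

# Hypothetical outbound allow-list with a wildcard entry
ALLOWED_FQDNS = ["*.windowsupdate.com", "www.contoso.com"]

def outbound_allowed(fqdn: str) -> bool:
    # Return True if the destination FQDN matches any allowed pattern.
    return any(fnmatch.fnmatch(fqdn.lower(), p) for p in ALLOWED_FQDNS)

print(outbound_allowed("download.windowsupdate.com"))  # True
print(outbound_allowed("evil.example.org"))            # False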
Amazon Web Services Firewall
AWS Network Firewall eases the process of implementing critical network security for all your Virtual Private Clouds (VPCs). The service is simple to configure and scales automatically based on your network activity, so you don't have to worry about building or managing any infrastructure. The configurable rules engine in AWS Network Firewall enables you to create firewall rules that provide fine-grained control over network traffic, such as limiting outbound Server Message Block (SMB) requests to prevent the spread of harmful behaviour. Additionally, you may import rules defined in commonly used open-source rule formats and enable integrations with managed intelligence feeds provided by AWS partners. AWS Network Firewall provides a web-based firewall console, enabling you to create firewall policies and network communication rules and then apply them centrally across your VPCs and accounts.
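As a hedged sketch of the SMB example above, the snippet below uses boto3's network-firewall client to create a stateful rule group from a Suricata-style rule. The group name, region, and capacity are made-up values, and the parameters should be checked against the current boto3 documentation before use.

import boto3

nfw = boto3.client("network-firewall", region_name="eu-west-1")

# One Suricata-style rule that drops outbound SMB (TCP/445)
suricata_rules = 'drop tcp any any -> any 445 (msg:"block outbound SMB"; sid:1000001; rev:1;)'

nfw.create_rule_group(
    RuleGroupName="block-outbound-smb",  # hypothetical name
    Type="STATEFUL",
    Capacity=10,
    Rules=suricata_rules,
)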
Inspect traffic between VPCs
AWS Network Firewall inspects and assists in controlling traffic across VPCs to logically isolate networks running critical applications or line-of-business workloads. AWS Network Firewall's stateful visibility at the network and application levels enables it to provide fine-grained network security controls for VPCs that are linked via AWS Transit Gateway.
Outbound traffic filtering
AWS Network Firewall enables outbound traffic filtering by URL/domain name, IP address, and
content to prevent data loss, assist in meeting regulatory standards, and block known malware
instances. AWS Network Firewall provides hundreds of rules that may be used to block network
traffic from known malicious IP addresses or domain names.
Secure AWS Direct Connect and VPN communications
AWS Network Firewall secures AWS Direct Connect and VPN traffic from client devices and on-premises environments that employ AWS Transit Gateway.
Internet traffic filtering
AWS Network Firewall assists in preventing intrusions by inspecting all inbound Internet traffic with capabilities such as Access Control List (ACL) rules, stateful inspection, protocol detection, and intrusion prevention.
Both AWS and Azure follow a pay-as-you-go model for firewalls. You pay an hourly rate for each
firewall endpoint and a data processing fee per gigabyte of data processed by the firewall. The
price you pay for AWS services is entirely dependent on the use case and deployment environment.
In the case of Azure, threat intelligence is provided by in-house Microsoft security threat labs. Additionally, Azure's firewall is HIPAA-compliant and an ICSA-certified network firewall.
Cloud services and infrastructure are becoming critical components of your company’s
infrastructure and storage - this calls for secure firewall solutions that prioritise
operability and dependability. Firewall services built for Microsoft Azure and Amazon Web
Services (AWS) offer this level of security and support to organisations looking to protect
their data and apps – particularly those with less sophisticated requirements.
Thomas Harpham has 30+ years of industry experience in networking and security solutions and extensive consulting experience with many small, medium, and large enterprises. Thomas has worked with various clients across the globe to create solutions that meet their requirements, including resiliency and scalability in the networking and security areas of IT, leveraging existing and new tools.
Researchers have discovered a new cutting-edge piece of Android malware that finds sensitive information stored on infected devices and sends it to attacker-controlled servers.
The app disguises itself as a system update that must be downloaded from a third-party store, researchers at security firm Zimperium said Friday. In fact, it is a remote access trojan that receives and executes commands from a command-and-control server. It offers a full-featured spy platform that performs a wide range of malicious activities.
Soup to nuts
Zimperium listed the following possibilities:
- Steal instant messenger messages
- Steal instant messenger database files (if root is available)
- Inspect the default browser's bookmarks and searches
- Inspect the bookmark and search history of Google Chrome, Mozilla Firefox and Samsung Internet Browser
- Search for files with specific extensions (including .pdf, .doc, .docx, .xls and .xlsx)
- Inspect the clipboard data
- View the content of notifications
- Record audio
- Record phone calls
- Take regular photos (through either the front or rear camera)
- List installed applications
- Steal images and videos
- Track the GPS location
- Steal text messages
- Steal phone contacts
- Steal call logs
- Exfiltrate device information (e.g. installed applications, device name, storage statistics)
- Hide its presence by hiding the icon in the device's drawer/menu
Messaging apps vulnerable to database theft include WhatsApp, which billions of people use, often with the expectation that it offers more confidentiality than other messengers. As noted, the databases are only accessible if the malware has root access to the infected device. Hackers can root infected devices when using older versions of Android.
If the malicious app does not acquire root, it can still collect conversations and message details from WhatsApp by tricking users into enabling Android accessibility services. Accessibility services are controls built into the operating system that make it easier for visually or otherwise impaired users to use devices, for example by customizing the display or having the device provide spoken feedback. Once accessibility services are enabled, the malicious app can scrape content on the WhatsApp screen.
Another possibility is stealing files stored in a device’s external storage. To reduce bandwidth consumption that could alert a victim that a device is infected, the malicious app steals thumbnail images, which are much smaller than the images they correspond to. When a device is connected to Wi-Fi, the malware sends stolen data from all folders to the attackers. When only a cellular connection is available, the malware transmits a more limited set of data.
As comprehensive as the spy platform is, it has one major limitation, which is the inability to infect devices without first tricking users into making decisions that more experienced people know aren’t safe. First, users must download the app from an external source. As problematic as Google’s Play Store is, it’s generally a more reliable place to get apps. Users must also be socially engineered to enable accessibility services for some advanced features to work.
Google declined to comment, except to reiterate that the malware was never available in Play.
Those in my generation remember the famous Ronald Reagan quote related to relations with Russia: "Trust, but verify." This was a good approach when dealing with Russia, but we have not adopted this model in the information security world. Instead, the approach has been trust, OR verify. Networks have traditionally been designed with trusted zones, usually those "securely" inside the network perimeter, with everything else being untrusted.
Sadly, with remote connections, interconnected offices, mobile devices, and cloud resources, the concept of a secure perimeter has gone the way of Reagan: fondly remembered, but no longer with us. This has not kept much of the business world from sticking with it, however.
A few years ago, Forrester Research, working for the National Institute of Standards and Technology (NIST), proposed a new network security model, called "Zero Trust." "New" is somewhat of a misnomer, as this is just an extension of an approach that has been around for some time, otherwise known as network segmentation. Zero Trust expands a bit on the original network segmentation approach, but the core of the concept is the same. The basic idea is to break a network down into segments, such as LAN, wireless, Web, database, etc. The assumption is that each zone is untrusted, even though it may reside within the walls of the corporate headquarters.
The specific design tenets, as defined by Forrester, include:
- Ensuring that all resources are accessed securely, regardless of location (in other words, the trusted zone is no more).
- Applying a least privilege strategy, and strictly enforcing access control. In Zero Trust, all users are initially untrusted.
- Inspecting and logging all traffic. Even traffic originating on the LAN is assumed to be suspicious, and is analyzed and logged just as if it came from the WAN.
- Supporting monitoring and control from a central console.
Full implementation of the Zero Trust model in the enterprise world requires multiple switch stacks connected to a high-speed core to handle the segmentation, often made up of multiple appliances or software packages. This approach is complex and expensive, and thus beyond the current reach of much of the business world. Some have tried to implement this approach using virtual LANs (VLANs), which involve the tagging of traffic to provide for virtual segmentation. Unfortunately, there is no absolute way to prevent a bad actor from ignoring VLAN rules and fully accessing the physical network.
I would suggest, however, that a simplified approach to Zero Trust, which for lack of a better term I will call "Zero Trust Lite," can be implemented within the budget and ability of most of the business world. While the specifics are somewhat different for each network, the general idea is:
Define your network segments
You need to begin by looking at a list of your data assets, and how your users connect to your network. Certainly, the public Internet will be a zone of its own. Any sensitive assets, such as customer data, PCI or HIPAA-regulated information, etc., would be a good candidate for a zone. Wireless users, given that this network extends beyond your walls, would be a zone by themselves. For many, a single zone for LAN users is appropriate.
Dedicate one or more network switches to each of your network segments
A traditional network has a bank of one or more switches on the inside of a firewall. With a Zero Trust approach, switches must be dedicated to each zone, and sit outside of the firewall, to avoid mixing of traffic.
Use a full-featured firewall at the core connecting all segments
A commercial-grade firewall will normally have a number of individual ports, each of which can host a zone. To use Zero Trust Lite, you will need as many ports on your firewall as you have zones. It also needs to have a variety of additional features not seen on every firewall, such as deep packet inspection, intrusion prevention, an understanding of applications versus just ports, and some sort of gateway anti-malware ability. Such firewalls are often referred to as "next generation," but that is more of a marketing term. Some examples include Dell SonicWall and Fortinet. As you are setting up your firewall, all zones should by default have no access to any other zone. Access that is specifically needed is added thereafter.
Implement tools to ensure access control and least privilege
Controlling access, and ensuring that users have the least privileges necessary, is something we all should already be doing, but I have rarely reviewed an organization that is doing it well. In the recent OPM hack, the perpetrators were using stolen administrative credentials, rendering most other security measures useless. Zero Trust Lite will help prevent this issue, given that, for example, you could prevent an administrative user from network access outside of the LAN zone. You need to go a step further, however, and make sure users have the correct privilege. The challenge here is that you are managing users on a diverse group of systems. In order to do this well, you must employ some automated functionality which allows for control of a single user across multiple platforms. Using LDAP-compliant systems is very helpful with this. I have also found that identity management systems, such as Okta, are of great benefit here.
When properly implemented, traffic from each zone is isolated from the others, and traffic only flows from one to the other as specifically permitted for a defined purpose. Thus, an intruder penetrating your wireless LAN would be limited to the access defined for wireless users. If the rules prevent wireless access to the servers, there would be no danger of a data breach from this zone, even for a user with server admin credentials. While care must be exercised in maintaining firewall rules and sizing network components, Zero Trust Lite can be used successfully by most organizations, and can greatly improve their security.
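To make the deny-by-default zone policy concrete, here is a minimal Python sketch of a zone-to-zone rule table; the zone names and ports are hypothetical.

# Zero Trust Lite in miniature: traffic is denied unless a (source zone,
# destination zone, port) combination is explicitly allowed.
ALLOWED = {
    ("lan", "web"): {80, 443},
    ("web", "database"): {5432},
    # no ("wireless", "database") entry -> wireless never reaches the DB
}

def permitted(src: str, dst: str, port: int) -> bool:
    return port in ALLOWED.get((src, dst), set())

print(permitted("lan", "web", 443))             # True
print(permitted("wireless", "database", 5432))  # False: denied by default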
Below are the M3AAWG published materials related to our messaging anti-abuse work. There is also a Messaging video playlist on our YouTube channel at www.youtube.com/maawg and there are a few selected videos on our website in the Training Videos and Keynotes Videos sections under the Meetings menu tab.
With the advent of International Domain Names, Internationalized Top-Level Domains and Email Address Internationalization there will be an increase in the legitimate usage of Unicode characters and an increase in the potential for its abuse as well. This document provides best practices to curtail the potential Unicode abuse.
Provides background on the use of Unicode characters in the abuse context with a tutorial on the options to curtail that abuse.
Opportunistic encryption is one step in protecting email traffic between messaging providers but it might not be sufficient unless forward secrecy is also employed for the connection. This document explains why forward secrecy is necessary and provides guidance for implementing it.
Many organizations and individuals register “parked” domains not meant to either send or receive email traffic. Mailbox providers can authenticate incoming email from these domains quite effectively, provided such domains have the necessary identifiers. This best practices document describes what identifiers can be used to indicate a domain or subdomain that is not meant to send or receive emails. The December 2015 version updates some industry links that changed.
Even though opportunistic encryption protects messages during transmission from sender to receiver, it is still possible for a Man-in-the-Middle (MITM) attacker with a self-signed certificate to impersonate the intended destination. This brief document describes the MITM situation, outlines various methods bad actors can use to conduct MITM attacks, covers components for deterring these attacks and introduces DANE (DNS-based Authentication of Named Entities), a new technology to assist messaging providers in validating they are communicating with an intended destination when using SSL/TLS.
Public Policy Comments
MAAWG submitted comments in September 2011
The comments were submitted to the National Institute of Standards and Technology on its draft NICE plan.
MAAWG submitted a response in September 2011 to the Science and Technology Committee, UK House of Commons
The committee's inquiry covered a variety of questions related to malware and cyber-crime.
MAAWG Response to U.S. Department of Commerce’s Internet Policy Task Force on the Global Free Flow of Information on the Internet
MAAWG comments were submitted November 2010 in response to the DoC request.
The U.S. Department of Commerce’s Internet Policy Task Force requested comments on government policies that restrict Internet information flow, seeking to understand why these restrictions have been instituted; what, if any, impact they have, and how to address negative impacts. The DoC will publish a report contributing to the Administration’s domestic policy and international engagement on these issues.
MAAWG Comments on ICANN Study on the Prevalence of Domain Names Registered Using a Privacy or Proxy Registration Service
MAAWG comments were submitted October 2010 based on the ICANN request.
ICANN conducted an exploratory study in 2009 to assess an approximate percentage of domain names (through a statistical sampling plan) contained in the top 5 gTLD registries that used privacy or proxy registration services. The study indicated that at least 18% (and probably not much more than 20%) of the domain names contained in the top 5 gTLD registries used privacy or proxy registration services.
The MAAWG letter supporting elements of FISA (see www2.parl.gc.ca/Sites/LOP/LEGISINFO/index.asp?Language=E&list=agenda) was submitted September 2010.
MAAWG submitted a letter supporting the global sharing of abuse-fighting information between law enforcement that is included in Canadian Bill C-28 establishing the federal Fighting Internet and Wireless Spam Act (“FISA”).
MAAWG Offers Free Video Training on IPv6 for Senders; Prepares Marketers for Transition to Updated Protocol
Incoming State Attorneys General Association President McKenna and FTC Consumer Protection Director Vladeck To Address Online Protection at MAAWG; Global Gathering Tackles Cybersecurity Policy, Technology, Mobile and Social Platforms
MAAWG Develops First Industry Best Practices for Protecting Web Messaging Consumers; Also Issues Practices for Email Complaint Feedback Loops and Evaluating Anti-Abuse Products for Email Operators
Facebook and Tata Communications Join MAAWG Board of Directors; Will Fight Spam and Online Abuse with Global Industry Organization
Articles About M3AAWG
ProPublica's Julia Angwin augments her earlier "list bomb" article with information on what can be done to prevent these attacks.
ProPublica journalist Julia Angwin describes how she and colleagues were "list bombed" and talks about the growing problem, including a preventive strategy developed by M3AAWG.
Mitre ATT&CK updated version includes a new layer of abstraction: sub-techniques
Corporate hacking attacks and data breaches are rising rapidly, so many organizations are adopting MITRE ATT&CK as a foundational element of their security programs. However, over the years many top security researchers felt that MITRE ATT&CK suffered from uneven levels of abstraction. To counter this, the dev team has released the new version of the MITRE ATT&CK v7 knowledge base. The new release adds sub-techniques and updates "Techniques", "Groups" and "Software" for both ATT&CK for Enterprise and ATT&CK for Mobile.
For those who are not from the IT security sector, the Mitre ATT&CK framework is a comprehensive matrix of tactics and techniques used by threat hunters, red teamers, and defenders to better classify attacks and assess an organization’s risk. The aim of the MITRE ATT&CK is to give enterprises an instant snapshot illustrating the actions the hacker or cybercriminal may have taken.
Mitre ATT&CK gives a quick knowledge base of how the attackers got in and how they are moving around in the enterprise network. The knowledge base is designed to help answer those questions while contributing to the awareness of an organization's security posture at the perimeter and beyond. Organizations use the MITRE ATT&CK framework to identify holes in defenses and prioritize them based on risk.
Over the years many security researchers have suggested that Mitre should widen the taxonomy to include sub-categories. With enterprise attack techniques growing, ATT&CK had to be updated to keep up with growing corporate security needs.
The MITRE ATT&CK v7 enterprise version contains sub-techniques that attackers could use. The MITRE ATT&CK v7 is available on the MITRE website, via ATT&CK Navigator, as STIX, or as a download from the TAXII server.
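As an example of the TAXII route, the Enterprise ATT&CK collection can be pulled with the stix2 and taxii2client Python packages. The collection ID below is the one MITRE published for Enterprise ATT&CK at the time of writing; verify it against the mitre/cti documentation before relying on it.

from stix2 import TAXIICollectionSource, Filter
from taxii2client.v20 import Collection

# MITRE's public TAXII collection for Enterprise ATT&CK
URL = ("https://cti-taxii.mitre.org/stix/collections/"
       "95ecc380-afe9-11e4-9b6c-751b66dd541e/")

src = TAXIICollectionSource(Collection(URL))
techniques = src.query([Filter("type", "=", "attack-pattern")])
print(len(techniques), "techniques (including sub-techniques)")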
If you want the MITRE ATT&CK v6, you can get it here.
I'm working on an assignment for class in which I have to define rules in a firewall configuration. One of the requirements is to allow users on the internal network to be able to "browse the web". Would I need to limit what ports they can access like HTTP or HTTPS or is this usually left wide open?
All outbound traffic, i.e., traffic originating from a higher security-level interface destined to a lower security-level interface, is left wide open. However, if required, you can limit it to web access only. For that you can apply an access-list on the inside interface and only open the following ports:
53 (udp) - for DNS
80 (tcp) - for HTTP
443 (tcp) - for HTTPS
While the IC’s research organization looks into adding security to cloud environments, in the here and now, intelligence agencies are sharing more data.
Following a series of high-profile attacks against its networks, the federal government has worked to bolster its cybersecurity defenses. Perhaps most notably, the Department of Homeland Security has upgraded its EINSTEIN program, which adds endpoint security and protects notebooks, tablets and phones connected to government networks. But EINSTEIN is not yet available to all contractors and is mostly a traffic signature–based program.
Enter Symantec’s Advanced Threat Protection: Network. This program is an innovative defense tool that makes use of machine learning to quickly detect and remedy threats against endpoints. Symantec has deployed over 175 million endpoint agents around the world, with each reporting the suspicious behaviors and possible threats it encounters back to headquarters.
For our testing, we set up a network of virtual machines to simulate a production environment of Windows, Mac and Linux clients, all of which the program supports. By deploying one agent, and by taking advantage of wizard-enabled machine learning, the program can exploit endpoint detection and response technology and anti-malware for each client.
The protection proved to be lightweight, allowing quick scans from the central console; each agent continued to work whether or not an operator watched. Once deployed, I used the dashboard to view a list of threats lurking in our test network.
Machine learning helped eliminate false positives. The program caught all the malware I injected into the endpoints, including more advanced stealth tools. In addition, the software showed its work, highlighting all files used in an attack, as well as email addresses, lateral movement and malicious IP addresses involved from the outside. The software is easy to use. I could remove suspect files and block threat venues with a single click. The product also integrates seamlessly with software from analytics provider Splunk and cloud company ServiceNow. Federal agencies that require multiple layers of defense could shore up their defenses in a relatively painless deployment using Symantec’s Advanced Threat Protection.
One major problem with relying on a single cybersecurity vendor is that when an attacker learns there is just one obstacle to overcome, it can work to circumvent that particular protection. Layering multiple programs within a network solves the problem of keeping all of your eggs in one basket, but the implementation often proves more difficult than it seems.
Agents, those tiny little programs that enable the main consoles to function, often behave like malware, silently reporting what they find back to a central server. That can trigger other security programs to flag them as malware, starting a tit-for-tat war, where the only winners are attackers who can make use of that chaos. As such, the new trend is for cybersecurity programs to try to get along.
Symantec’s Advanced Threat Protection is designed to deploy agents and provide its protection without interfering with any existing defenses. We tested that by deploying McAfee endpoint protection and Malwarebytes anti-malware on several virtual clients in our test bed before adding the Symantec protection agents. Surprisingly, the presence of multiple defenses did not immediately trigger a storm of false positives or internal warfare. But the true test came when we injected advanced malware into multiple protected endpoints.
The only setback occurred because the programs seemed to compete to identify and remediate the threat first. The Symantec program, which won that three-way skirmish about 50 percent of the time, would identify the threat and report it to the main console as normal — if it grabbed the intruder first. When another program found and quarantined or eliminated the threat, the main Symantec console would not generate a report because the agent technically never saw it.
That scenario probably won’t prove to be a problem for most admins, but, if that’s a concern, they should check program logs from time to time in order to see which low-level threats were automatically eliminated and by which protection.
The good news for all admins is that today's cybersecurity programs, including Symantec's Advanced Threat Protection, seem to be more willing to work together than in the past. Healthy competition exists, but the benefit goes to the networks being protected, not some outside threat looking to take advantage of internal fighting.
Wed Jan 27 CST
In your timezone (EDT): Wed Jan 27 12:00pm - Wed Jan 27 1:00pm
Non-profit organizations often underestimate their risk of cyberattacks, leaving them more vulnerable than they may realize. Yet failing to prioritize cybersecurity could have major effects, like data losses, financial catastrophe, damaged reputation and future donor support - or even the need to shut down the organization altogether.
The speakers will discuss the current state of cybersecurity for non-profit organizations and how a Security Information and Event Management (SIEM) approach can help NPOs protect against sophisticated attacks, specifically looking at Microsoft’s Azure Sentinel solution.
In this webinar, we will:
• Understand how non-profit organizations are at risk
• Explore the tools NPOs are leveraging to protect their data, including Microsoft’s Azure Sentinel
• Learn what steps to take to adopt a SIEM strategy to investigate and hunt for suspicious activities within your organization
Principal, BDO Digital, LLC
Cloud Security and Identity Management Specialist, Microsoft
Distinguishing legitimate software from malicious software is a problem that requires a lot of expertise. One approach to building malware detection software consists in extracting System Call Dependency Graphs (SCDGs), which summarize the behavior of a piece of software. Once SCDGs are extracted, a learning phase identifies sub-graphs characteristic of malicious behaviors. To classify the graph of an unknown binary, we check whether it contains such sub-graphs. These techniques have proved efficient, but no analysis of the sub-graphs extracted during the learning phase has been conducted so far. We study the sub-graphs we find and showcase preprocessing steps on the graphs in order to improve learning and classification performance. The approach has been applied to graphs extracted from Mirai, a malware which created a large botnet in 2016 to perform distributed denial-of-service attacks. We show that the preprocessing step tremendously improves the speed of learning and classification.
MetaMask is a popular browser extension and cryptocurrency wallet that allows users to interact with decentralized applications (Dapps) on the Ethereum blockchain. This innovative tool has gained significant attention due to its user-friendly interface and enhanced security features. In this article, we will delve into the intricacies of MetaMask’s security protocol and explore how it safeguards users’ funds and data.
One of the primary reasons behind MetaMask’s robust security is its clever use of cryptographic protocols. By utilizing cryptographic techniques such as public-private key pairs and digital signatures, MetaMask ensures that only authorized users can access and manage their digital assets. This means that even if a user’s computer or browser is compromised, their private keys remain safe and secure.
Another important aspect of MetaMask’s security is its commitment to protecting users’ privacy. MetaMask uses a technique called “sandboxing” to isolate user data and prevent unauthorized access. This means that even if a malicious Dapp tries to access sensitive information, it will be unable to do so, as MetaMask keeps each Dapp’s data in a separate, isolated environment.
Furthermore, MetaMask employs strong encryption algorithms to ensure the confidentiality of users’ transactions. This means that every transaction sent through MetaMask is encrypted and can only be deciphered by the intended recipient. This adds an extra layer of protection and prevents any unauthorized parties from intercepting and accessing the contents of the transaction.
Overall, MetaMask’s security protocol is a testament to the importance of prioritizing user safety in the world of blockchain technology. By employing advanced cryptographic techniques, sandboxing, and encryption, MetaMask ensures that users can confidently engage with the Ethereum ecosystem without compromising their funds or privacy.
Exploring the Mechanisms of MetaMask’s Secure Protocol
MetaMask’s secure protocol is an essential component of its reputation as a reliable cryptocurrency wallet. Understanding the mechanisms behind its security features can provide insights into how it safeguards user data and transactions.
1. Cryptographic Key Management
MetaMask uses cryptographic key management to ensure the secure storage and usage of private keys. Private keys are generated and stored locally on a user’s device, providing full control over their funds. The keys are encrypted using a user-defined password, offering an additional layer of protection against unauthorized access.
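As a rough illustration of this idea, here is a Python sketch of password-based key encryption using PBKDF2 and Fernet from the cryptography package. The scheme is generic and for illustration only; it is not MetaMask's actual vault format.

import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Derive an encryption key from the user's password, then encrypt the
# private key with it. Password and key bytes below are placeholders.
password = b"user-chosen password"
salt = os.urandom(16)

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(password))

vault = Fernet(key).encrypt(b"<private key bytes>")  # stored locally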
2. Browser Extension Security
MetaMask is designed as a browser extension, utilizing the security mechanisms provided by the browser itself. These mechanisms ensure that MetaMask’s code is isolated from the websites it interacts with, preventing malicious websites from tampering with the extension or gaining access to sensitive information.
3. Secure Communication
MetaMask uses HTTPS to establish secure communication channels between the user’s device and the Ethereum network. By encrypting data in transit, MetaMask prevents unauthorized parties from intercepting and modifying the exchanged information. This secure communication ensures that transactions and sensitive data remain confidential.
4. Seamless Integration with Decentralized Applications (DApps)
MetaMask allows users to interact with various decentralized applications (DApps) seamlessly. Its secure protocol ensures that users can safely connect and interact with DApps without compromising their private keys or exposing sensitive information. MetaMask prompts users to review and approve transactions, providing an additional layer of protection against unauthorized activities.
MetaMask’s secure protocol combines cryptographic key management, browser extension security, secure communication, and seamless DApp integration to provide users with a reliable and secure cryptocurrency wallet experience. By delving into the mechanisms of MetaMask’s secure protocol, users can gain a deeper understanding of the measures that safeguard their data and transactions.
Understanding the Inner Workings of MetaMask’s Security Features
MetaMask is a browser extension that allows users to interact with the Ethereum blockchain. As a gateway to decentralized applications, it is crucial for MetaMask to prioritize security and protect users’ funds and personal information.
Encryption and Key Management
One of the core security features of MetaMask is encryption. When a user creates a MetaMask wallet, a unique cryptographic key pair is generated. The private key, which is securely stored on the user’s device using their browser’s storage mechanism, is used to sign transactions and grants access to the user’s funds. The public key is derived from the private key and is used to verify the authenticity of messages.
In addition to encryption, MetaMask also implements key management techniques to enhance security. Users can create multiple accounts within their MetaMask wallet, allowing them to manage different sets of keys for different purposes. This can be particularly useful for separating personal and business funds or organizing funds for different projects.
MetaMask employs secure communication protocols to ensure that users’ transactions and interactions with decentralized applications are protected. When a user sends a transaction or interacts with a dApp, MetaMask establishes a secure connection with the Ethereum network using Transport Layer Security (TLS) encryption. This prevents unauthorized parties from intercepting or tampering with sensitive information such as transaction details or wallet addresses.
In addition to TLS encryption, MetaMask also enforces secure communication between the browser extension and the web pages it interacts with. This is achieved through the use of iframes and the browser’s Content Security Policy (CSP) mechanism, which prevents malicious scripts from accessing sensitive information or manipulating the MetaMask interface.
Protection Against Phishing Attacks
Phishing attacks are a common security threat in the cryptocurrency space. MetaMask employs various measures to protect users from falling victim to phishing attempts. One such measure is the detection of malicious websites that attempt to mimic the MetaMask interface. When a user accesses a suspicious website, MetaMask displays a warning message, alerting the user to the potential phishing attempt.
Furthermore, MetaMask also provides users with the option to manually verify the authenticity of a website by comparing the SSL certificate information. This additional step adds an extra layer of security, ensuring that users are interacting with genuine and trusted websites.
In conclusion, MetaMask’s security features encompass encryption and key management, secure communication protocols, and protection against phishing attacks. These features work together to safeguard users’ funds, personal information, and interactions with the Ethereum blockchain.
Diving into the Technology behind MetaMask’s Privacy Measures
MetaMask is a widely-used Ethereum wallet and decentralized application (dApp) browser extension that provides users with the ability to manage their digital assets and interact with the blockchain securely. One crucial aspect of MetaMask’s offering is its strong privacy measures, ensuring that user data and transactions remain secure and confidential.
1. Encryption
MetaMask employs advanced encryption algorithms to protect user data and transactions from unauthorized access. All sensitive information, such as private keys and account data, is encrypted using industry-standard encryption protocols. This ensures that even if someone gains access to a user's device, they won't be able to decrypt and misuse the data.
2. Secure Communication
MetaMask uses secure communication protocols to establish a connection between the user’s device and the Ethereum network. All data transmitted between the two endpoints is encrypted during transit to prevent eavesdropping and tampering. This ensures that user transactions and interactions with dApps cannot be intercepted or modified by malicious actors.
By leveraging these technologies, MetaMask ensures that users can enjoy a high level of privacy while using their Ethereum wallets and engaging with dApps. It gives users peace of mind knowing that their sensitive information and transactions are safeguarded from potential threats in the digital landscape.
Examining the Encryption Methods Implemented in MetaMask’s Protocol
MetaMask, the popular Ethereum wallet and browser extension, employs advanced encryption methods to ensure the security and privacy of user data. By examining the encryption methods implemented in MetaMask’s protocol, we can gain a deeper understanding of how this wallet safeguards users’ transactions and private keys.
One of the main encryption techniques utilized by MetaMask is asymmetric encryption. This method involves the use of two keys: a public key and a private key. The public key, as the name suggests, is shared with other participants in the network, while the private key is kept secret and used for decrypting the data. This ensures that only authorized parties can access and read the encrypted information.
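Here is a generic sketch of that key-pair-and-signature flow in Python, using the secp256k1 curve that Ethereum wallets rely on. It is illustrative only; MetaMask's actual signing path (and Ethereum's choice of hash function) differ in detail.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Generate a key pair on secp256k1, the curve used by Ethereum
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

message = b"send 1 ETH to 0xABC..."  # hypothetical transaction payload
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can verify; a tampered message raises
# cryptography.exceptions.InvalidSignature.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))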
In addition to asymmetric encryption, MetaMask also employs symmetric encryption algorithms. Symmetric encryption relies on a single key, known as the secret key, which is used for both encryption and decryption processes. The use of symmetric encryption allows for faster data processing, making it suitable for encrypting large amounts of data.
MetaMask’s protocol also incorporates hashing algorithms, which are instrumental in protecting the integrity of data. Hash functions convert any input into a fixed-length string of characters, known as a hash value. Even a slight change in the original input will produce a significantly different hash value. By comparing hash values, MetaMask can verify the integrity of data and detect any tampering or alterations.
Furthermore, MetaMask leverages secure key management techniques to protect users’ private keys. Private keys are stored in encrypted form on the user’s device, making it extremely difficult for hackers to gain access to them. Additionally, MetaMask implements secure password handling practices, such as salted hashing, to further ensure the security of user accounts and prevent unauthorized access.
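To make the key-sealing idea concrete, here is a minimal Python sketch of password-based key encryption along the lines described above: a key-encryption key is derived from the user's password with a salted KDF, and the private key is sealed with authenticated encryption. It uses the third-party cryptography package; the parameter choices are illustrative assumptions, not MetaMask's actual implementation.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_private_key(private_key: bytes, password: str) -> dict:
    # Derive a 256-bit key-encryption key from the password (salted KDF).
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    kek = kdf.derive(password.encode())
    # Seal the private key with authenticated encryption (AES-256-GCM).
    nonce = os.urandom(12)
    ciphertext = AESGCM(kek).encrypt(nonce, private_key, None)
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

def unseal_private_key(vault: dict, password: str) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=vault["salt"], iterations=600_000)
    kek = kdf.derive(password.encode())
    # Raises InvalidTag if the password is wrong or the vault was tampered with.
    return AESGCM(kek).decrypt(vault["nonce"], vault["ciphertext"], None)

Because the ciphertext is authenticated, a wrong password or a modified vault fails loudly instead of silently yielding a corrupted key.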
Overall, through the meticulous implementation of asymmetric and symmetric encryption methods, hashing algorithms, and secure key management techniques, MetaMask’s protocol ensures that user data remains protected and confidential. By understanding these encryption methods, users can have greater confidence in the security of their transactions and private keys while using the MetaMask wallet.
Unveiling the Key Components of MetaMask’s Robust Security Framework
MetaMask, the popular browser extension for accessing Ethereum-based decentralized applications (dApps), has implemented a robust security framework to ensure the safety and protection of its users’ digital assets. This framework consists of several key components that work together to create a secure and trustworthy environment for interacting with the blockchain.
One of the main components of MetaMask’s security framework is its encrypted wallet. When a user creates a MetaMask wallet, a unique encryption key is generated to encrypt and decrypt the wallet’s private keys. This encryption key is securely stored on the user’s device, ensuring that only the user has access to their private keys and can perform transactions using their digital assets.
Another important component of MetaMask’s security framework is its phishing detection mechanism. MetaMask employs various phishing detection techniques to identify and block malicious websites that attempt to steal users’ private keys or sensitive information. This mechanism warns users when they are navigating to a potentially dangerous website, helping them to avoid falling victim to phishing attacks.
MetaMask also incorporates a secure transaction signing process as a key component of its security framework. When a user initiates a transaction, MetaMask verifies the transaction details, including the recipient address and the amount being sent, before signing the transaction with the user’s private key. This ensures that the transaction is legitimate and prevents unauthorized transactions from being executed.
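As a rough illustration of such a pre-signing check, the Python sketch below validates the recipient address and amount before handing the transaction to a signer. The sign_with_private_key callback is a hypothetical placeholder, and the checks shown are examples rather than MetaMask's actual rule set.

import re

HEX_ADDRESS = re.compile(r"^0x[0-9a-fA-F]{40}$")

def sign_transaction(tx: dict, approved_recipient: str, sign_with_private_key):
    # Reject malformed recipient addresses.
    if not HEX_ADDRESS.match(tx.get("to", "")):
        raise ValueError("recipient is not a valid 20-byte hex address")
    # Reject recipients that differ from what the user approved.
    if tx["to"].lower() != approved_recipient.lower():
        raise ValueError("recipient does not match the approved address")
    # Reject nonsensical amounts (value is denominated in wei).
    if not isinstance(tx.get("value"), int) or tx["value"] < 0:
        raise ValueError("transaction value must be a non-negative integer")
    return sign_with_private_key(tx)  # hypothetical signer callback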
Additionally, MetaMask’s security framework includes a network connection security layer. MetaMask connects to the Ethereum network using secure communication protocols, such as SSL/TLS, to encrypt data transmission and protect against eavesdropping or tampering. This network connection security layer ensures that users can securely interact with the blockchain without their data being compromised.
Furthermore, MetaMask incorporates continuous security updates and improvements as part of its security framework. The MetaMask development team actively monitors and addresses any identified security vulnerabilities, releasing regular updates to enhance the security of the extension. Users are encouraged to keep their MetaMask extension up to date to benefit from the latest security enhancements.
Overall, MetaMask’s robust security framework comprises encrypted wallets, phishing detection mechanisms, secure transaction signing processes, network connection security, and continuous security updates. Together, these components create a secure and trusted environment for users to manage and interact with their digital assets on the Ethereum blockchain.
What is MetaMask’s security protocol?
MetaMask’s security protocol is a set of measures and practices implemented within the MetaMask wallet to ensure the safety and protection of users’ funds and personal information.
How does MetaMask ensure the security of users’ funds and personal information?
MetaMask ensures the security of users’ funds and personal information through several means. First, it uses strong encryption algorithms to protect sensitive data. Second, it implements a secure login process that requires a password or biometric authentication. Third, it provides users with a secure vault that stores private keys locally on their device. Fourth, it warns users about potential phishing attacks and malicious websites. Fifth, it undergoes regular security audits and updates to address new threats and vulnerabilities.
Definition for: Listing
Listing is the process and rules to be complied with if a Security is to be traded on an exchange. See also Listed security.
(See Chapter 40 Setting up a company or financing start-ups of the Vernimmen)
As mobile devices have become more popular, many organizations are publishing dedicated mobile apps (native applications for phones and tablets). While these apps create many opportunities for the organization—such as more revenue, a more customized user experience, and so on—they also create more opportunities for hackers.
Once a mobile app achieves a significant user base, malicious actors have a variety of potential threat vectors to exploit. Broadly speaking, there are three categories of these:
- Client-device exploits: threat vectors made possible when the client device is compromised.
- Network-level attacks: malicious activities targeting the communication between the device and backend endpoint.
- API endpoint attacks: hostile actions taken directly against the backend endpoint that mobile devices communicate with.
In this three-article series, we’ll examine each of these categories. We’ll discuss the various vulnerabilities contained in each one, and the strategies and tools to defend against them.
Here in Part 1, we’ll discuss threats against client devices.
What Are Device-Level Attacks?
Device-level attacks are hostile actions directed against physical devices connected to the Internet. In this article, we’re focusing specifically on mobile devices such as phones and tablets.
When a client device is running your mobile app, hackers can attempt different types of attacks depending on their goals. In some cases, they will attempt to gain access to your network. In others, their target is sensitive information stored within the device itself.
Here are several types of vulnerabilities they can try to exploit.
Improper Platform Usage
Such vulnerabilities commonly arise when entities fail to use certain security controls (such as API security), or misuse certain features that are relied upon by the application.
Most modern mobile development frameworks are bundled with security features and best practices that developers should use in their applications. Sometimes developers choose to ignore or misconfigure these features, which can result in vulnerable entry points. Common security features that are recommended for mobile development include platform permissions, Android intents, the iOS Keychain, and Touch ID/Face ID.
The technical and business impacts of improper platform usage are severe since hackers can escalate such vulnerabilities to penetrate deeper into the system, and attempt stack-wide attacks.
Reverse Engineering
Threat actors are often keenly interested in the structure and operation of the apps they attack. The richest source of insights is the source code of the app itself. Although the original code is usually not available, hackers can still reverse-engineer it with disassemblers, decompilers, and other sophisticated tools. Such scanning tools also help attackers identify the nomenclature of methods and classes used in developing the application, allowing them to clone, inject, or extract sensitive information from the code.
With this accomplished, hackers can create replica applications, or modify the original binaries. In either case, the application stack’s integrity is compromised.
Insecure Data Storage
Most developers tend to assume that enabling client-side storage of files is relatively safe because it restricts other entities from accessing data. For mobile devices, this is not true; sometimes, attackers can physically obtain the device and circumvent security protections. The device can further be exploited with tools that can extract usernames, passwords, authorization tokens, cookies, location data, credit card data, and private API calls.
Access to such sensitive data is obviously a windfall for attackers, making them highly motivated to obtain it.
Security Decisions via Untrusted Inputs
Certain applications use the existence or values of an input to implement a protection mechanism. For example, hidden form fields, environmental variables, and cookies are often used for this purpose. Worse, they are often used despite being untrusted, and lacking sufficient integrity checking or encryption to validate them.
Attackers who notice that this is occurring can take advantage of it. For example, they might modify untrusted input variables to bypass security checks such as authentication and authorization. The impact of this can vary depending on the system affected; it can lead to sensitive data exposure, execution of arbitrary code, or denial of service.
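One common mitigation is to make such values tamper-evident before trusting them. The minimal Python sketch below signs a hidden field with an HMAC so the server can detect client-side modification; the secret and field layout are illustrative assumptions.

import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative; keep real secrets out of source

def protect(value: str) -> str:
    # Attach a MAC so the client cannot silently alter the value.
    tag = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{tag}"

def verify(field: str) -> str:
    value, _, tag = field.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("untrusted input failed integrity check")
    return value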
Rooted Android or Jailbroken iOS Devices
Mobile devices with modified operating systems are susceptible to being exploited. Sometimes, users will alter root access settings to enable an OS modification or run certain specific actions. These configuration changes can allow certain processes to assume absolute system control. Attack vectors leverage such compromised processes to install corrupted applications and escalate privileges for deeper penetration.
With rooted/jailbroken devices, attackers can often bypass security mechanisms. These could allow them to wage attacks such as:
- Code injection using compromised devices to target backends/APIs
- Installing spyware on client devices to try and gain access to application servers
- Gaining access to encryption keys
Extraneous Functionality
Often during coding, developers include hidden capabilities that are intended for the development/testing phases, and should be discarded for live instances. If these modules are included in production environments, hackers could use them in unintended ways.
The severity of these attacks depends on the capabilities that were inadvertently included. Unfortunately, because these features are usually intended to facilitate development, they tend to be quite powerful, such as having high inherent levels of permissions/privileges, or even the ability to bypass authentication and authorization mechanisms completely. Therefore, these attacks can be very damaging, potentially exposing sensitive processes and data.
Best Practices to Defend Against Device-Level Attacks
While recommended practices vary for different use cases, here are some commonly recommended practices to reduce the risk of attacks targeting your user’s mobile devices.
Enable Strong Caching Mechanisms
Although caching simplifies the user experience, it is known to expose sensitive data to malicious actors. When developing an application, developers should document the way the OS caches logs, buffers, and media. It is also recommended that you document the caching behavior of your development, social, ad, and analytic frameworks. Security teams should perform simulated tests to discern how the OS and frameworks handle cached data by default, then apply mitigation controls for sensitive data exposed in the cache.
Publish Updates as Needed
Mobile frameworks offer the ability to issue patches and updated versions to users. It can be tempting to focus on enhancing features and functionality, while paying less attention to security issues. Developers should resist this tendency, and diligently monitor changes in the threat landscape. As new security issues arise, updates and patches should be rolled out to mitigate emerging vulnerabilities.
Rotate Session Cookies
Stored cookies can be used to wage session attacks, where the attacker accesses the user’s previous session and assumes their identity. Developers should implement non-persistent cookies and other techniques that will invalidate user sessions and prevent session hijacking.
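As a minimal sketch of that idea, the Python snippet below issues a fresh, unpredictable session identifier at login (and again on any privilege change) and marks the cookie non-persistent; the helper names and session store are illustrative assumptions.

import secrets

def rotate_session(response, session_store, old_session_id=None):
    # Invalidate the previous session so a stolen ID cannot be replayed.
    if old_session_id is not None:
        session_store.pop(old_session_id, None)
    # Issue a fresh, unpredictable identifier.
    new_id = secrets.token_urlsafe(32)
    session_store[new_id] = {}
    # No Max-Age/Expires attribute: the cookie dies with the browser session.
    response.headers["Set-Cookie"] = (
        f"session={new_id}; Secure; HttpOnly; SameSite=Strict; Path=/"
    )
    return new_id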
Enable Server-Side Validation for Session Tokens
Since tokens don’t store session-based data that hackers can manipulate, they prevent session fixation and CSRF attacks on mobile endpoints. Tokens stored on the client side can be used across multiple servers to perform authentication for a combination of services. As a best practice, access tokens should only be validated by the server/API for which they are intended.
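The sketch below shows what server-side audience validation can look like with the third-party PyJWT library; the signing key and audience value are illustrative assumptions.

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-real-key"  # illustrative

def validate_token(token: str) -> dict:
    # Rejects tokens issued for a different service, expired tokens,
    # and tokens that fail signature verification.
    return jwt.decode(
        token,
        SIGNING_KEY,
        algorithms=["HS256"],
        audience="api://mobile-backend",  # this server/API only
    )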
Runtime Protection Measures
Actors with control of the client device can access the application’s binaries and modify them, compromising the application stack’s integrity. Developers should take steps to prevent reverse engineering and code tampering with techniques such as code obfuscation and binary protections. Some obfuscation techniques include altering code structure, removing metadata, encrypting data structures, and transforming logical expressions.
Although it is impossible to fully prevent threat actors from attempting to reverse engineer or modify the application’s binaries, it is possible to hinder their efforts and make it more difficult for them to succeed.
Implement Appropriate Input Validation
Applications are built to accept input from users, so it is important to thoroughly test entered inputs to ensure they do not contain code or other content that can affect the application’s response. Validation can help to ensure that the application only accepts predefined formats through input forms and related fields.
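A minimal allow-list validator along those lines might look like the following Python sketch; the field formats are illustrative assumptions, and a real application would cover every field it accepts.

import re

ALLOWED = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "phone": re.compile(r"\+?[0-9]{7,15}"),
}

def validate(field: str, value: str) -> str:
    # Accept only predefined formats; reject everything else.
    pattern = ALLOWED.get(field)
    if pattern is None or pattern.fullmatch(value) is None:
        raise ValueError(f"invalid value for {field!r}")
    return value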
Perform Periodic Manual Code Reviews
Although writing secure code is paramount, administering security is a continuous process. It is critical to regularly audit the application’s components and code to ensure security best practices, compliance, and frameworks are being followed. While automation helps rapidly test flaws, manual audits can expose vulnerabilities that were either missed or incorrectly assessed by automated scanning tools. Manual reviews also enable testers to closely examine data paths within the application through which unvalidated input is entered.
Tools and Techniques to Prevent Device-Level Attacks
Mobile applications should include up-to-date access control mechanisms to safely determine who gets to view, modify, or delete application services. As the first line of defense, access control measures include multi-factor authentication, enforcement of strong passwords, and biometric logins.
Applications are sometimes published with vulnerabilities and security gaps that were overlooked during development. Application security testing tools can prevent this by automatically scanning the application code and configuration environment for security weaknesses and vulnerabilities. These include static and dynamic testing (SAST, DAST) tools, runtime application self-protection (RASP) tools, and software composition analysis (SCA) tools. Manual code reviews can also help avoid the creation of vulnerabilities in the first place, and periodic audits of application code and components can help to ensure that security best practices, compliance, and frameworks are being followed.
IPS/IDS (Intrusion Prevention and Detection Systems)
As a final (but crucially important) line of defense, intrusion detection and prevention systems can intercept attacks and block abusive traffic coming from compromised client devices. For mobile app traffic, this is usually a WAF or cloud WAF system that includes API security. A robust cloud security solution can process incoming requests, block attacks while displaying traffic data in real time, analyze user behavior to improve threat detection, adapt to new threats with machine learning, and so on.
With a swift rise in mobile computing, the challenges around security cannot be ignored. As most users carry sensitive data on their devices, malicious actors have created mechanisms to exploit vulnerabilities and compromise these systems in order to exfiltrate data.
In this article, we discussed mobile apps, and how organizations which publish them must assume that their apps will sometimes be used on compromised devices. This means that appropriate steps must be taken to ensure that hackers can’t leverage these situations to cause additional harm.
In Part 2 of this series, we’ll discuss how attackers can target networks and communications between client devices and the backend, and some best practices for securing potential vulnerabilities. |
Simply put, the robots exclusion standard (also called the robots exclusion protocol or robots.txt protocol) is an easy way of telling Web crawlers and other Web robots which parts of a Web site they can and cannot view.
To give robots instructions about which parts of your site they can access, you can put a text (.txt) file called robots.txt in the main directory of your Web site, e.g. https://owlman.neocities.org/robots.txt. This file tells robots which parts of your site they may view; however, some robots can ignore such files, especially malicious (or bad) robots.
If the robots.txt file does not exist, Web robots assume that they can see all parts of your site.
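For example, a minimal robots.txt (an illustrative sketch, not this site's actual file) that lets all robots in everywhere except one private directory, and blocks one badly behaved crawler entirely, looks like this:

# Keep all robots out of the private directory
User-agent: *
Disallow: /private/

# Block one specific (hypothetical) crawler completely
User-agent: BadBot
Disallow: /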
An example of a good robot (and a good boy).
(ASCII art of K-9, the robot dog, not reproduced here.)
Here are some useful links on robots.txt that may help you. |
NetworkAcl / Attribute / entries
The entries (rules) in the network ACL.
Describes an entry in a network ACL.
CidrBlock (string) –
The IPv4 network range to allow or deny, in CIDR notation.
Egress (boolean) –
Indicates whether the rule is an egress rule (applied to traffic leaving the subnet).
IcmpTypeCode (dict) –
ICMP protocol: The ICMP type and code.
Code (integer) –
The ICMP code. A value of -1 means all codes for the specified ICMP type.
Type (integer) –
The ICMP type. A value of -1 means all types.
Ipv6CidrBlock (string) –
The IPv6 network range to allow or deny, in CIDR notation.
PortRange (dict) –
TCP or UDP protocols: The range of ports the rule applies to.
From (integer) –
The first port in the range.
To (integer) –
The last port in the range.
Protocol (string) –
The protocol number. A value of “-1” means all protocols.
RuleAction (string) –
Indicates whether to allow or deny the traffic that matches the rule.
RuleNumber (integer) –
The rule number for the entry. ACL entries are processed in ascending order by rule number. |
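For example, here is a short boto3 (Python) sketch that prints each entry of an existing network ACL; the ACL ID is a hypothetical placeholder:

import boto3

ec2 = boto3.resource("ec2")
acl = ec2.NetworkAcl("acl-0123456789abcdef0")  # hypothetical ID

for entry in acl.entries:
    direction = "egress" if entry["Egress"] else "ingress"
    cidr = entry.get("CidrBlock") or entry.get("Ipv6CidrBlock")
    print(f"rule {entry['RuleNumber']} {direction} {cidr} "
          f"protocol {entry['Protocol']} -> {entry['RuleAction']}")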
Signature Based Intrusion Detection System Using SNORT
- Vinod Kumar
- International Journal of Computer Applications and Information Technology
- Keywords: Intrusion Detection System, Snort, BASE, TCP Replay
Nowadays, intrusion detection systems play a very important role in network security. As the use of the internet grows rapidly, the possibility of attack is increasing at the same rate. People are using signature-based IDSs; Snort is the most widely used signature-based IDS because it is open-source software, and it is used worldwide in the intrusion detection and prevention domain. The Basic Analysis and Security Engine (BASE) is used to view the alerts generated by Snort. In this paper we have implemented signature-based intrusion detection using Snort. Our work will help novice users understand the concepts of Snort-based IDSs.
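To illustrate what a Snort signature looks like, here is a minimal rule; the message, network variable and SID are assumptions chosen for this example. It raises an alert when inbound HTTP traffic contains a suspicious path:

alert tcp any any -> $HOME_NET 80 (msg:"Example - /etc/passwd in HTTP traffic"; content:"/etc/passwd"; nocase; sid:1000001; rev:1;)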
In the past week, we have seen emails being sent directly to end users with detailed information about them. The image below only shows this person's first name, but we can confirm that the first name and surname are correct, along with his home address. We've blocked out the details, but can you spot the mistakes that mark this email as malicious?
How many spelling mistakes can you spot? The grammar is another giveaway warning.
- DO NOT open the file!
- DO NOT reply to the sender!
- DELETE the email immediately!
We've taken a short video of ourselves opening the attached file. After entering the password that's in the email, it asks you to enable macros. When you click Enable, it immediately looks for an internet connection, which tells us that it's trying to obtain an encryption key. We disconnected the desktop in the video from any network, so it had no internet connectivity.
The document is blank, but we noticed that the cursor is flashing in the middle of the row instead of at the far left-hand side, so we know there is something hidden within the file. Turning on the developer tools, we could see the text of the macro that they want you to run.
The code clearly shows that its first task is to display a "check the SSL certification" message. While this message is showing, another instruction tells the device to open hidden connections to two predetermined URLs. Once these URLs have been visited, a small malicious file is installed onto your device, and the results can be very disruptive to that device and any mapped drives it's connected to.
If you receive a suspicious attachment on an email that you are not expecting (even if it’s from somebody you know) DO NOT open it. Call the sender to confirm they did send it. If you cannot confirm, delete it! |
Could it be that one of the most common credos taught to security professionals is actually leading them astray?
Every practitioner has heard it before: Trust that employees are doing the right thing, but verify that data is protected. Proponents of a new security model, however, argue that while the phrase “trust, but verify” sounds good in theory, the reality is that most security practitioners have been doing the opposite – trusting users by default, but never verifying that data is protected.
“Whoever said, ‘This needs to become a mantra,' missed the mark,” says John Kindervag, a senior analyst at Forrester Research. “It incentivizes people to not know what's going on. There is no reason to have any trust in the network.” Kindervag is the driving force behind a new model called “zero-trust” that is gaining support within the security community.
The strategy is based on the idea that security must be made ubiquitous throughout the network, not just at the perimeter. No longer should there be any distinction between a trusted internal network and the untrusted external network. The zero-trust model dictates that all network traffic should be untrusted.
The idea was born to solve a fundamental security problem: Once an attacker penetrates a network, they have unfettered access to the resources inside, Kindervag says. Plus, malicious insiders don't even need to break into the network to abuse its resources.
Consider this: 49 percent of breaches investigated in 2009 by Verizon were linked to insiders. This figure dropped to 17 percent for incidents investigated last year, but according to Verizon, the decrease was attributed to a monumental increase in smaller external attacks, rather than a true reduction in insider activity.
For both years, investigators found that the vast majority of internal breaches were the result of intentional malicious activity.
The zero-trust model aims to mitigate internal and external threats through changes in both security philosophy and network architecture. The model has three core concepts, the first of which is to ensure all network assets are accessed securely, which necessitates using encrypted tunnels.
Next, limit and steadfastly enforce access control across the enterprise, which discourages insiders from abusing or misusing network resources. To do so, Forrester recommends using role-based access control (RBAC) products, which assign individuals to a role that determines what they can access.
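As a minimal sketch of the RBAC idea in Python (the roles and permissions are illustrative assumptions, not a particular product's schema):

ROLE_PERMISSIONS = {
    "engineer": {"read:source", "write:source"},
    "finance": {"read:ledger", "write:ledger"},
    "auditor": {"read:source", "read:ledger"},  # read-only everywhere
}

USER_ROLES = {"alice": "engineer", "bob": "auditor"}

def is_allowed(user: str, permission: str) -> bool:
    # Access is granted only through the user's role, never directly.
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("bob", "read:ledger")
assert not is_allowed("bob", "write:ledger")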
The third concept is to log and inspect internal and external network traffic. Most organisations already keep logs, but few actually go so far as to inspect them. For this piece, Forrester suggests using traditional security information management systems in conjunction with so-called network analysis and visibility (NAV) solutions, which include tools to analyse flow data, dissect packet captures, inspect network metadata and facilitate network forensic examination. Such tools can provide security practitioners with a better understanding of what is happening on a network and make it easier to monitor applications.
Going beyond the three essential concepts of zero-trust, the model suggests new network architecture designs that focus on data security from inception. Historically, networks have been built from the outside in – starting with the internet connection and moving inward. Security was bolted on, in layers, after initial design.
Today's networks, Kindervag argues, should be built from the inside out, starting with the system resources and data that need to be protected. “Security is so important that we need to invert the way we design networks so we can embed security into the very DNA of the network,” Kindervag says. “That's what zero-trust is all about.”
The model essentially describes how to break up aspects of a network into different enclaves and protect them, says Eddie Schwartz, CSO at network monitoring and analysis firm NetWitness. “Imagine islands of protection versus all-purpose layers that might fail in some way,” he says. Kindervag warns, though, that zero-trust is not about one particular solution, nor is it a one-time project.
In fact, the first and most important step of adopting the model is free: Security practitioners must stop using the word “trust” as it relates to networking and security. Rather, adopt a mindset that the concept of trust is inappropriate with respect to data security, and spread the message to teams throughout the organization.
First introduced before a small audience at an IT forum last May, zero-trust resonated with people, Kindervag says. The model then gained increasing support once introduced to the masses in the September 2010 paper, “No More Chewy Centres”. One such supporter is FCC Group, a Spanish construction and infrastructure company. With 93,000 employees, a footprint in 54 countries and innumerable contractors with access to the company's networks, insider and third-party threats are a major concern, says Gianluca D'Antonio, the company's CISO.
“When I first heard about the zero-trust model, I realised that we had intuitively started adopting a similar approach,” says D'Antonio, who also is a member of Forrester's security and risk leadership board. “Zero-trust helped us plug the holes and complete the architecture around a true data-and-user-centric operation.”
The zero-trust network framework, in which security is embedded into the network – as opposed to added on after design – offers protection from threats and helps isolate and contain damage if an incident arises, D'Antonio says. Moreover, it offers the bonus of easier compliance with security regulations and standards.
Further, zero-trust can help organisations reduce their threat profile by providing a sense of where their most critical data is stored and how it is transacted, says Phil Agcaoili, CISO at Cox Communications, a broadband communications and entertainment company. Today, most organisations are dealing with network proliferation.
The zero-trust model provides tighter control over data and pinpoints where practitioners must pay attention, Agcaoili says. By using virtualisation technologies, for example, it is possible to create an environment where users can work with data, but never truly have access to it on their endpoint. The model expands on ideas that have been around for some time, but until now haven't been developed as part of a working system that scales and is adaptable to real-world situations, FCC Group's D'Antonio says.
The framework actually echoes ideas presented by a series of computer standards developed during the 1980s and 90s by the US Department of Defense. Named the “Rainbow Series,” the standards are designed to build trusted computer systems, says Ken Ammon, chief strategy officer at access control solutions provider Xceedium. The premise behind the now-defunct program was that trust should be built into systems, instead of granted to users. “Zero-trust is, like many things, a new spin on an old story,” Ammon says.
Many forward-thinking organisations within the financial services, energy, high-tech and retail industries have, over the past several years, been instinctively adopting zero-trust properties, such as the pervasive capture and analysis of network traffic, says NetWitness' Schwartz.
Many are also beginning to rearchitect their enterprise networks to focus on protecting data. Agcaoili says members of his security team at Cox have been familiarising themselves with zero-trust and exploring the costs and benefits of implementing its ideas.
He knows of several other well-known organizations that have already adopted the model. “They created zoned environments for the most critical data and provided remote access capability through virtualized desktops,” he says. The FCC Group has already implemented some zero-trust aspects throughout the organisation, focusing on efforts to gain greater control over insiders and contractors, as well as to ensure all resources are accessed securely, D'Antonio says.
The company's security team has already deployed infrastructure monitoring solutions and a data leakage prevention program and is now concentrating on using NAV tools to increase network visibility. Transitioning the entire network to align with zero-trust designs is a long-term goal. “What makes this model outstanding is the ability to adapt to it and incorporate some bit of the model while the rest of your infrastructure still remains untouched,” D'Antonio says. “This way you can start the transition process at areas of high risk and still run your legacy systems and networks in the old fashion way.”
While it has received a swath of support, even many proponents of zero-trust agree that the model requires holistic changes that will not come easy. For starters, changing the way people think about security is never an easy task, D'Antonio says. Members of IT departments are used to internal structures that are shaped toward their needs, not geared toward security. “Changing that culture and finding enough clout within the organization is difficult,” he adds.
And while organisations can embrace portions of zero-trust right away, adopting the full model and replacing legacy infrastructures will take some time. For example, FCC Group has made large investments in its network architectural model and changing it will require funds from more than one department's budget, D'Antonio says.
To begin adopting zero-trust, security practitioners should become familiar with all the model's philosophies and architectural ideas, and then look for subnetworks or lab environments where they can start testing them, Kindervag says. Also, regular meetings with networking counterparts should occur to discuss plans and how they can be applied to the overall network architecture.
NetWitness' Schwartz recommends first applying zero-trust methodologies to the most critical aspects of the network, then have a plan to transition, over the next several years, the rest of the network using a risk-based approach.
- Ensure all resources are accessed securely.
- Limit and enforce access control across the enterprise.
- Log and inspect internal and external network traffic.
- Redesign networks from the inside out.
- Adopt a mindset that trust is inappropriate with respect to network security.
- Spread the message across the organisation.
- Set up meetings with counterparts in networking to discuss how zero-trust can benefit the organisation.
- Look for subnetworks where the model can be tested.
- Begin implementing zero-trust ideas, starting with the most critical parts of the network.
- Ask vendors if and how they support zero-trust principles.
- Create a plan to transition the entire network over the next two to three years. |
Pascal Meunier is covering his VMworld experience, mostly about security topics.
The VIX API on Tuesday morning was a very interesting session. It will enable the remaining automation functionality of ReAssure. It allows you to automate the powering on and off of virtual machines, the taking of snapshots, transferring files (e.g., results) between the host and guest OS, and even starting programs in the guest OS! It was introduced with VMware Server 1.0 last summer, but I hadn't noticed. It is still a work in progress though; there's support only for C, Perl and COM (no Python, although I was told that there was a SourceForge project for it).
There are of course other teaching labs using virtualization that have been developed at other universities and colleges; the challenge is of course to be able to design courses and exercises that are portable and reusable. We can all gain by sharing these, but for that we need a common infrastructure where all these exercises would be valid.
As a member of the panel argued, virtualization doesn't make things better or worse; it still all depends on the practices, processes, procedures, and policies used in managing the data center and the various data security and recovery plans. Another pointed out that people shouldn't assume that virtual appliances or virtualization provide security out-of-the-box. Out of all malicious software, currently 4-5% checks whether it is running inside a virtual machine; this may become more common.
Chapter 5. Logging for Developers
5.1.1. About Logging
Logging is the practice of recording a series of messages from an application that provide a record (or log) of the application's activities.
Log messages provide important information for developers when debugging an application and for system administrators maintaining applications in production.
Most modern logging frameworks in Java also include other details such as the exact time and the origin of the message.
5.1.2. Application Logging Frameworks Supported By JBoss LogManager
JBoss LogManager supports the following logging frameworks:
- JBoss Logging - included with JBoss EAP 6
- Apache Commons Logging - http://commons.apache.org/logging/
- Simple Logging Facade for Java (SLF4J) - http://www.slf4j.org/
- Apache log4j - http://logging.apache.org/log4j/1.2/
- Java SE Logging (java.util.logging) - http://download.oracle.com/javase/6/docs/api/java/util/logging/package-summary.html
JBoss LogManager supports the following APIs:
- JBoss Logging
JBoss LogManager also supports the following SPIs:
- java.util.logging Handler
- Log4j Appender
If you are using the Log4j API and a Log4j Appender, then Objects will be converted to String before being passed.
5.1.3. About Log Levels
Log levels are an ordered set of enumerated values that indicate the nature and severity of a log message. The level of a given log message is specified by the developer using the appropriate methods of their chosen logging framework to send the message.
JBoss EAP 6 supports all the log levels used by the supported application logging frameworks. The six most commonly used log levels are (in order of lowest to highest): TRACE, DEBUG, INFO, WARN, ERROR, and FATAL.
Log levels are used by log categories and handlers to limit the messages they are responsible for. Each log level has an assigned numeric value which indicates its order relative to other log levels. Log categories and handlers are assigned a log level and they only process log messages of that level or higher. For example, a log handler with the level of WARN will only record messages of the levels WARN, ERROR, and FATAL.
5.1.4. Supported Log Levels
Table 5.1. Supported Log Levels

- TRACE: Use for messages that provide detailed information about the running state of an application. Log messages of TRACE are usually only captured when debugging an application.
- DEBUG: Use for messages that indicate the progress of individual requests or activities of an application. Log messages of DEBUG are usually only captured when debugging an application.
- INFO: Use for messages that indicate the overall progress of the application. Often used for application startup, shutdown and other major lifecycle events.
- WARN: Use to indicate a situation that is not in error but is not considered ideal. May indicate circumstances that may lead to errors in the future.
- ERROR: Use to indicate an error that has occurred that could prevent the current activity or request from completing but will not prevent the application from running.
- FATAL: Use to indicate events that could cause critical service failure and application shutdown and possibly cause JBoss EAP 6 to shut down.
5.1.5. Default Log File Locations
These are the log files that get created for the default logging configurations. The default configuration writes the server log files using periodic log handlers.
Table 5.2. Default Log File for a standalone server

- EAP_HOME/standalone/log/server.log: Server log. Contains all server log messages, including server startup messages.
- EAP_HOME/standalone/log/gc.log: Garbage collection log. Contains details of all garbage collection.
Table 5.3. Default Log Files for a managed domain

- EAP_HOME/domain/log/host-controller.log: Host Controller boot log. Contains log messages related to the startup of the host controller.
- EAP_HOME/domain/log/process-controller.log: Process controller boot log. Contains log messages related to the startup of the process controller.
- EAP_HOME/domain/servers/SERVER_NAME/log/server.log: The server log for the named server. Contains all log messages for that server, including server startup messages.
These commands work on both client and server.
Usage: logfile [filename]
This command will begin writing a log of all console events to an external file.
Note: No file extension will be applied to the log unless done by the user. For instance, if you wanted to log to an output file named "newlog.txt", you would use the command as logfile newlog.txt.
Running the logfile command without a filename will stop logging if there is currently a logfile being written.
AI : Future of Cyber Security
Does it strike you that the cybercriminals are outgunning you? You must be right 100% of the time, but the cybercriminals only need to be right once to penetrate your network. So, we need some help, as in Artificial Intelligence.
Manufacturing, Supply Chain, Logistics industries as well as Cybersecurity — Artificial Intelligence (AI) is everywhere.
Wondering what is so special about Artificial Intelligence?
Well, AI is a field that can help bring task automation to a far more optimal and efficient level than any human ever could.
No matter which sector you work in, you have very likely already been breached. It is also clear that you have more data available than is manually possible to analyze. So, humans need some help, some artificial help, as in Artificial Intelligence. By the way, the cybercriminals already have it: AI tools are already being used to probe for weaknesses, and AI-driven Twitter bots are well known as constant sources of phishing campaigns.
AI has received a lot of hype, but one of the areas where AI is already proving useful is cybersecurity. AI tools are helping to detect malware and unauthorized (inappropriate) activity using several different approaches. One approach is to use a branch of AI called Machine Learning that allows machines to learn to recognize good versus bad patterns of behavior. This is often referred to as behavior analysis: the machine establishes a baseline and flags behavior that diverges from that baseline by a sufficient delta. Another approach examines attributes of the various binaries so that the machine can group files that seem similar. In both cases, what makes the AI systems truly useful is their ability to learn the baselines and determine the attributes most useful for clustering on their own.
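As a minimal sketch of the behavior-analysis approach, the Python example below trains an anomaly detector on a baseline of normal activity and flags new activity that diverges from it; the library is scikit-learn, and the feature vectors are made-up stand-ins for real telemetry.

from sklearn.ensemble import IsolationForest

# Each row is a behavioral feature vector, e.g.
# [logins_per_hour, megabytes_uploaded, distinct_hosts_contacted]
baseline = [
    [3, 12.0, 4], [4, 10.5, 5], [2, 8.0, 3], [5, 14.2, 6],
    [3, 11.1, 4], [4, 9.8, 5], [2, 13.0, 3], [3, 10.0, 4],
]

# Learn what "normal" looks like from the baseline period.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new activity: -1 means it diverges enough to flag for review.
new_activity = [[3, 11.0, 4], [40, 900.0, 120]]
for features, verdict in zip(new_activity, model.predict(new_activity)):
    print(features, "anomalous" if verdict == -1 else "normal")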
Piling up security tools to beef up each line of defense indirectly increases the attack surface available to an attacker. Given the sheer volume of attacks and the number of endpoints, systems, and approved communications channels you are protecting, it is going to be essential to incorporate products that use AI into your tools portfolio. That does not mean you should abandon your existing monitoring processes; instead, you enhance the protection of your environment by learning how AI tools can help protect your organization in the future.
Both Penetration Testing and Threat Hunting are very time-consuming, laborious, and mentally grueling tasks. There are a lot of smaller steps in both of these processes that have to take place, and once again, many of them are repetitive. This is where the tools of AI can come into play.
For example, Pentoma is an AI-powered penetration testing solution that allows software developers to conduct smart hacking attacks and efficiently pinpoint security vulnerabilities in web apps and servers. It identifies holes in web application security before hackers do, helping prevent any potential security damages.
Hunchly is another tool used by DFIR (Digital Forensic and Incident Response) teams for online investigations that automatically collects documents and annotates every web page you visit. Hunchly does capture everything in your browser and tags it to a particular investigation, helping you to save so many efforts.
Another strong use of Artificial Intelligence tools is the filtering of false positives. Security teams are flooded with warnings and alerts, and because of the time it takes to analyze them, many of the real alerts that come through remain unnoticed, increasing the risk. By using AI tools, the false positives are filtered out, leaving only the real and legitimate alerts to be examined and triaged.
Thus, by adopting an AI mindset, the business will achieve a far greater Return On Investment (ROI), which means that the CIO/CISO will be in a much better position to get more out of their security budget.
– AI in Cybersecurity by Leslie F. Sikos
– Practical AI for Cybersecurity by Ravi Das
– The CISO Handbook
IX90122: WHEN A VIRTUAL HOSTNAME IS MAPPED TO MULTIPLE IPS, THE ORB MAY SEND THE REQUEST TO ONLY ONE IP.
Closed as program error.
Error Message: In a scenario where the remote hostname was mapped to multiple IPs, the ORB was sending requests to only one of the IPs. Stack Trace: N/A.
The ORB code was incorrectly routing remote requests to only one IP associated with the Virtual Host Name.
This defect will be fixed in: 5.0.0 SR16 FP4, 6.0.0 SR15, 6.0.1 SR7, and 7.0.0 SR6. The ORB code has been modified to perform the hostname-to-IP resolution before routing remote requests.
Reported component name
Reported component ID
Last modified date
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fixed component name
Fixed component ID
Applicable component levels |
Detecting Malware's Encrypted Network Traffic Using Perlin Noise and Convolutional Neural Network
The detection of malicious network traffic has been the focus of many researchers, especially with the use of machine learning. Unlike traditional signature-based detection, machine learning allows for a behavioral analysis of such traffic packets, increasing the chances of detecting new variants of malware as long as they share the same behavioral model. However, as more of the internet shifts towards encrypted traffic to preserve the confidentiality and integrity of data, adversaries exploit such cryptographic methods to bypass traditional network detection techniques. In this research, the focus is on detecting malware's encrypted network traffic by designing a new method based on Convolutional Neural Networks (CNNs). Our proposed approach encodes given connection features into images using Perlin noise in order to train the deep learning model to classify connection flows as malign or benign. Since the payload is encrypted, we extract contextual features from the connection metadata that best characterize the behavior of malign and benign traffic, and then use our new feature augmentation method based on Perlin noise to generate trainable images. We use the captured CTU-13 real botnet traffic dataset mixed with normal traffic and background traffic, and analyze it using a CNN trained on the Perlin-noised images. Our deep learning model achieves a high accuracy of 97% and a low false negative rate of 0.4%, and is compared with different machine learning methods such as SVM, NN, Gaussian Naïve Bayes and Random Forests.
In this work we consider the problem of pursuit evasion games (PEGs), where a group of pursuers is required to detect, chase and capture a group of evaders with the aid of a sensor network in minimum time. Unlike standard PEGs, where the environment and the location of the evaders are unknown and a probabilistic map is built based on the pursuers' onboard sensors, here we consider a scenario where a sensor network, previously deployed in the region of concern, can detect the presence of moving vehicles and relay this information to the pursuers. We propose a general framework for the design of a hierarchical control architecture that exploits the advantages of a sensor network by combining both centralized and decentralized real-time control algorithms. We also propose a coordination scheme for the pursuers to minimize the time-to-capture of all evaders. In particular, we focus on PEGs with sensor networks orbiting in space for artificial space debris detection and removal.
LibreOffice, a popular open-source office suite, contains a major code execution flaw. The flaw could allow anyone to execute arbitrary Python commands through the application, and it can be exploited via a malicious document containing a macro that is opened with LibreOffice. The flaw was discovered by security researcher Nils Emmerich of ERNW.
Emmerich explains that the flaw resulted from faulty code in LibreLogo.
“To move the turtle, LibreLogo executes custom script code that is internally translated to Python code and executed. The big problem here is that the code is not translated well, and just supplying Python code as the script code often results in the same code after translation,” said Emmerich.
Since the flaw is unpatched, users are advised to install LibreOffice without macro support, or to exclude the LibreLogo component during installation.
**Placement group strategy**
**Amazon Web Services (AWS)** provides various services to set up high-performance computing (HPC) environments on the go. EC2 instance types such as CPU-optimized and GPU-optimized instances, Spot Instances/Spot Fleets, and auto scaling help build high-performance computing at optimal cost. By controlling EC2 instance placement, you can increase performance or improve availability. This HOWTO article will walk you through the different types of AWS EC2 placement groups and how they can be set up in the AWS cloud environment.
AWS EC2 placement group strategies should be considered when designing low-latency applications and critical workloads. They let customers choose where to place instances based on the criticality of the workloads. Cluster, Spread, and Partition placement groups each have their own pros and cons, so the architect needs to assess the workload before choosing a placement group type. Kindly note that you can't merge placement groups with each other, and an instance can be part of only one placement group at a time.
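For example, creating a cluster placement group and launching instances into it with boto3 might look like this; the AMI, instance type and group name are illustrative assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster strategy: pack instances close together for low-latency traffic.
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Launch instances directly into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="c5n.18xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster-pg"},
)
```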
Vembu, as part of its [BDR Suite](https://www.bdrsuite.com/vembu-bdr-suite/) and with its more than a decade of experience in the data protection domain offers a simple, secure, and cloud-native [AWS backup software](https://www.bdrsuite.com/aws-backup/) for your AWS EC2 instances. You can recover your backup instance, volume, or even files at any time with near-zero RTO.
Even if AWS offers manual snapshots and volume backups, you need a complete AWS Data Protection solution for backup, versioning, archiving, retention and recovery to protect your EC2 instances from data threats in the cloud ranging from accidental file deletion to malware. |
Macros are basically functions (and a good way to reuse code). You can pass multiple values into a macro; separate them with spaces, not commas.
The important part of the macro is the first line: #macro is the directive that defines a macro. Inside the parentheses you must place the name of the macro, in this case contentType, followed by its parameters (here $type).
The example below picks up the type and then displays an image based on the contentType or $type.
A macro itself in full looks like the following:
#macro (contentType $type)
  #if($type == "blog")
    <img src="/application/assets/images/blog-badge-small.png" alt="blog">
  #elseif($type == "photo")
    <img src="/application/assets/images/photo-badge-small.png" alt="photo">
  #elseif($type == "video")
    <img src="/application/assets/images/video-badge-small.png" alt="video">
  #elseif($type == "events")
    <img src="/application/assets/images/event-badge-small.png" alt="event">
  #end
#end
To call the macro, use a line like the following. The name of the macro is contentType; place a value between the parentheses:

#contentType("blog")
Electronic documents are ubiquitous and essential to all aspects of modern life. Individuals and organizations must routinely engage with electronic documents received from a variety of unauthenticated or potentially compromised sources, comprising a growing variety of electronic data formats. Even if the immediate provider of the data can be authenticated, the data may derive from an untrusted source. We expect pictures, charts, spreadsheets, maps, audio, video, as well as rich messages potentially including any and all of these, to be received with a click of a button, DARPA researchers point out. However, the complexity of managing such electronic data results in software vulnerable to attack. This situation is unsustainable, DARPA experts claim.
On December 23, 2015, the Ukrainian Kyivoblenergo, a regional electricity distribution company, reported service outages to customers. Shortly after the attack, Ukrainian government officials claimed the outages were caused by a cyber attack, and that Russian security services were responsible for the incidents. Following these claims, investigators in Ukraine, as well as private companies and the U.S. government, performed analysis and offered assistance to determine the root cause of the outage.
The study and analysis found that the adversaries weaponized Microsoft Office documents (Excel and Word) by embedding BlackEnergy 3 within the documents. During the cyber intrusion stage of Delivery, Exploit, and Install, the malicious Office documents were delivered via email to individuals in the administrative or IT network of the electricity companies. When these documents were opened, a popup was displayed to users to encourage them to enable the macros in the document. Enabling the macros allowed the malware to Exploit Office macro functionality to install BlackEnergy 3 on the victim system. Upon the Install step, the BlackEnergy 3 malware connected to command and control (C2) IP addresses to enable communication by the adversary with the malware and the infected systems. These pathways allowed the adversary to gather information from the environment and enable access.
Current software that processes electronic data such as documents, messages, and data streams is error-prone and vulnerable to exploitation by malicious inputs. According to MITRE’s Common Vulnerability Enumeration data, over 80% of yearly reported vulnerabilities occur in code that handles input data. Such code converts a given bit stream representing the data into memory objects and validates that these objects have expected structure and relationships.
Exploitation of input-handling vulnerabilities leverages inaccurate programmer assumptions regarding the extent to which input data has been validated by input-handling code. Code that behaves correctly under certain assumptions (and may even be proven correct under these assumptions) will typically not behave correctly if any of these assumptions do not hold. Attackers can induce incorrect behaviors by presenting vulnerable software with maliciously crafted input data that violates unchecked assumptions. The programmer assumes that validated input data contains certain objects in certain relationships, and writes code under these assumptions. However, should any of these assumptions not hold, the code will not behave correctly. A single missing or incorrect check can create a vulnerability, as was the case with the Heartbleed vulnerability (CVE-2014-0160), in which code acting on an unchecked assumption exposed sensitive memory content to remote attackers.
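To make this pattern concrete, the Python sketch below parses a toy length-prefixed record and shows the single check whose absence creates this class of vulnerability; the record format is invented for illustration.

import struct

def parse_record(buf: bytes) -> bytes:
    # Toy format: 2-byte big-endian length, then payload.
    if len(buf) < 2:
        raise ValueError("truncated header")
    (claimed_len,) = struct.unpack(">H", buf[:2])
    payload = buf[2:]
    # The check Heartbleed-style bugs omit: never trust the claimed
    # length over the number of bytes actually received.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds available data")
    return payload[:claimed_len]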
Parsing or checking code itself contains exploitable flaws and behaviors. Such flaws are particularly insidious, as they require little or no human interaction for the attack to succeed or lead to pre-authentication vulnerabilities.
Today, code for input data validation is typically written manually in an ad-hoc manner. For commonly-used electronic data formats, input validation is, at a minimum, a problem of scale whereby specifications of these formats comprise hundreds to thousands of pages. Input validation thus translates to thousands or more conditions to be checked against the input data before the data can be safely processed. Manually writing the code to parse and validate input, and then manually auditing whether that code implements all the necessary checks completely and correctly, does not scale. Moreover, manual parser coding and auditing typically fails even for electronic data formats specifically designed to be easier to perform such tasks, e.g., JSON and XML. A variety of critical vulnerabilities have been found in major parser implementations for these formats.
Widely deployed mitigations against crafted input attacks include (a) trying to prevent the flow of untrusted data to vulnerable software; and (b) testing software with randomized inputs to find and patch flaws that could be triggered by maliciously created inputs. Unfortunately, neither of these approaches offer security assurance guarantees.
Mitigations for preventing the flow of untrusted data to vulnerable software, which can be implemented via network or host-based measures such as firewalls, application proxies, antivirus scanners, etc., neither remove the underlying vulnerability from the target, nor encode complete knowledge of document or message format internals. Attacker bypasses of such mitigations exploit incompleteness of the mitigations’ understanding of the data format to exploit the still-vulnerable targets.
The effectiveness of fuzzing methods for testing of software with randomized inputs to find and fix flaws depends on whether randomly generated inputs can emulate maliciously crafted inputs closely enough to trigger all relevant code flaws. Although modern fuzzing methods incorporate feedback from tracing the execution of the code as it consumes crafted inputs, they also employ symbolic and concolic execution of code in their exploration of the space of potential crafted inputs. As a result, these methods are still essentially heuristic. There is no guarantee that attackers, who also use fuzzing to locate and develop vulnerabilities, will not cover a more substantial and more productive portion of the input space with a different set of heuristics.
DARPA is soliciting innovative research proposals in the area of secure processing of untrusted electronic data. Proposed research should investigate innovative approaches that radically improve software’s ability to recognize and safely reject invalid and maliciously crafted input data, while preserving essential functionality of legacy electronic data formats. Proposals should build on an existing base of knowledge of electronic document, message, and streaming formats and the nature of security vulnerabilities associated with these formats.
Safe Documents (SafeDocs) program
The Safe Documents (SafeDocs) program will develop novel verified programming methodologies for building high-assurance parsers for extant electronic data formats, and novel methodologies for comprehending, simplifying, and reducing these formats to their safe, unambiguous, verification-friendly subsets (“safe sub-setting”). SafeDocs will address the ambiguity and complexity obstacles posed by extant electronic data formats, which hinder the application of verified programming. SafeDocs’ multi-pronged approach will combine:
- extracting the extant formats’ de facto syntax (including any non-compliant syntax deliberately accepted and substantially used in the wild);
- identifying a syntactically simpler subset of this syntax that lends itself to use in verified programming while preserving the format’s essential functionality; and
- creating software construction kits for building secure, verified parsers for this syntactically simpler subset, and high-assurance translators for converting extant instances of the format to this subset.
The parser construction kits developed by SafeDocs will be usable by industry programmers who understand the syntax of electronic data formats but lack the theoretical background in verified programming. These tools will enable developers to construct verifiable parsers for new electronic data formats as well as extant ones. The tools will guide the syntactic design of new formats by making verification-friendly format syntax easy to express, and vice versa.
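As a rough sketch of what a parser construction kit buys you, the combinators below declare a tiny length-prefixed format once and derive the checking code from that declaration. The combinator names and the format are hypothetical; production kits target verified languages and far richer formats, but the principle is the same: the bounds checks live in the kit rather than in ad-hoc hand-written code.

```python
from typing import Any, Callable, Tuple

# A parser maps (input, offset) to (value, new_offset) or raises ParseError.
Parser = Callable[[bytes, int], Tuple[Any, int]]

class ParseError(Exception):
    pass

def byte() -> Parser:
    """Consume exactly one byte."""
    def p(data: bytes, i: int):
        if i >= len(data):
            raise ParseError(f"unexpected end of input at offset {i}")
        return data[i], i + 1
    return p

def take(n: int) -> Parser:
    """Consume exactly n bytes, refusing to read past the end of the input."""
    def p(data: bytes, i: int):
        if i + n > len(data):
            raise ParseError(f"need {n} bytes at offset {i}, input too short")
        return data[i:i + n], i + n
    return p

def length_prefixed() -> Parser:
    """A one-byte length followed by exactly that many payload bytes."""
    def p(data: bytes, i: int):
        n, i = byte()(data, i)
        payload, i = take(n)(data, i)
        return payload, i
    return p

def parse_document(data: bytes) -> bytes:
    """Top-level parse: one length-prefixed field, and nothing after it."""
    value, end = length_prefixed()(data, 0)
    if end != len(data):
        raise ParseError("trailing bytes after document")
    return value

print(parse_document(bytes([3]) + b"abc"))   # b'abc'
# parse_document(bytes([9]) + b"abc")        # raises ParseError instead of over-reading
```

Because every consuming primitive enforces its own bounds, the Heartbleed-style mismatch between a declared length and the actual payload is rejected by construction.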
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., announced contracts Wednesday to Galois Inc. in Portland, Ore., and to the Northrop Grumman Corp. Technology Services segment in Herndon, Va. for the Safe Documents (SafeDocs) program.
BAE Systems to develop new cyber tools for DARPA to improve security of electronic data formats
BAE Systems will develop new cyber tools designed to help prevent vulnerabilities in electronic files that can lead to cyberattacks.
BAE Systems has been awarded a contract by the U.S. Defense Advanced Research Projects Agency (DARPA) to develop new cyber tools designed to help prevent vulnerabilities in electronic files that can lead to cyberattacks. Development of these tools will be part of DARPA’s Safe Documents (SafeDocs) program, which aims to more effectively identify and reject malicious data in a variety of electronic formats.
Every day, individuals and organizations in military, government and commercial industries receive electronic content, such as Portable Document Format (PDF) and digital media files, from unauthorized or potentially compromised sources, which creates security risks. As part of the SafeDocs program, BAE Systems’ FAST Labs™ research and development team will create two different cyber tools. The first tool seeks to recover, simplify, and automatically select safe feature subsets within electronic data formats to help encode the data safely and unambiguously, while the second is a toolkit to help software developers avoid vulnerabilities in the software they create to process complex electronic data.
“Research on the SafeDocs program will leverage BAE Systems’ expertise in cyber, algorithmic, and systems engineering domains to give developers tools that currently don’t exist in government or commercial markets to more easily and efficiently ensure the security of electronic documents,” said Anne Taylor, product line director of the Cyber Technology group at BAE Systems. “As the creation and use of electronic documents continues to grow every day, so does the risk for potential cyberattacks, making it essential we create solutions that are built with security in mind to help keep content safe.”
The research for Phase 1 of the SafeDocs program, which is being developed with funding from DARPA, adds to BAE Systems’ cyber technology portfolio. Work for the program will be completed with teammate American University and will take place at the company’s facilities in Arlington, Virginia and Burlington, Massachusetts.
Penn State to increase computer security by developing more secure parsers
A parser, the element in a computer system that converts data inputs into an understandable format, is the first line of defense for cybersecurity. A multi-institute group of researchers that includes Gang Tan, James F. Will Career Development Associate Professor of Electrical Engineering and Computer Science and a co-hire at the Institute for Computational and Data Sciences (ICDS), has received an $8 million grant that allots $1 million for Penn State’s part of the research to increase computer security by developing more secure parsers.
The research project, “SPARTA: The Secure Parser Toolkit for Assurance,” is funded by the Defense Advanced Research Projects Agency (DARPA) and is a collaboration among Penn State and Galois Inc., Cornell University and Purdue University researchers. The role of a parser in a computer system is to take outside data inputs and convert them into internal representations. Parsers are considered a critical security piece in many systems because they should be able to identify adversarial elements and warn a system user that the program in question may be taking malicious input. However, a cyber attacker could feed malformed data that would trigger bugs in the parser to take over the system. Tan and his research team aim to create parsers that have provable guarantees about safety and are not susceptible to the many bugs that parsers commonly have now.
“There are tools you can use to manually write those parsers, but, in the end, you don’t get many guarantees,” Tan said. “You just rely on the competence of the programmers, and often, these parsers are very complex. Programmers make mistakes, and as a result those mistakes cause vulnerability in computer systems.” For example, at the time that Tan submitted his research proposal, over 1,000 parser bugs were reported for the popular suite of Mozilla products, impacting the security of many common file formats including PDF, ZIP, PNG and JPG.
Tan said that he hopes that with the creation of the SPARTA system, he will potentially be able to develop the most secure parsers to date with a novel parser language and rigorous formal methods. The researchers are focusing specifically on a program called SafeDocs that is geared toward safely opening PDFs. “PDF is a format with a lot of features, and some features are harder to handle than other features,” Tan said. “This parser would warn you if this PDF document is not obeying some safe subset of the format. If this parser agrees to open it, you’re guaranteed to be safe. There’s a provable way of saying it’s safe.” While the project’s focus is on parsers for PDF security, the researchers hope their new system can be applied to other formats, including for videos and images.
“The topic of parsing has been there since the early days of programming, but people have been mostly focusing on functionality, saying, ‘I can build a parser that can parse this kind of data,’ but haven’t paid too much attention to correctness, that is, how do you convince the world that this parser is doing the right thing?” Tan said. “And that turns out to be quite important given the cybersecurity threat. I think what is most exciting about our research is it could give some provable guarantees.”
Resilient internetwork routing over heterogeneous mobile military networks
Journal article, Peer reviewed
Original version: MILCOM IEEE Military Communications Conference 2015. DOI: 10.1109/MILCOM.2015.7357474
Mobile networks in the military tactical domain include a range of radio networks with very diverse characteristics, which may be employed differently from operation to operation. When interconnecting networks with dissimilar characteristics (e.g., capacity, range, mobility), a difficult trade-off is to fully utilize the diverse network characteristics while minimizing the cost. To support the ever-increasing requirements of future operations, it is necessary to provide tools to quickly alter the rule-set during an ongoing operation, due to a change in the operation and/or to support different needs. Our contribution is a routing protocol that targets these challenges. We propose an architecture to connect networks with different characteristics. One key point is that low-capacity links/network segments can be included in the heterogeneous network; these segments are protected from overload by controlling where and when signaling/data traffic is sent. The protocol supports traffic policing, including resource reservation. The other key point is the ability to quickly alter the network policy (rule-set), including QoS support, during an operation or from operation to operation.
Deterministic Networking (DetNet) Security Considerations
Note: This ballot was opened for revision 12 and is now closed.
Alvaro Retana No Objection
Benjamin Kaduk (was Discuss) No Objection
Comment (2021-02-26 for -15)
There are probably a couple places that currently reference only RFC 8939 where it would be appropriate to reference both 8939 and 8964.

Section 1:
"This document is based on the premise that there will be a very broad range of DetNet applications and use cases, ranging in size scope from individual industrial machines to networks that span an entire..."
nit: s/size scope/size and scope/

Section 4:
"DetNet is designed to be compatible with DiffServ [RFC2474] as applied to IT traffic in the DetNet. DetNet also incorporates the use of the 6-bit value of the DSCP field of the Type of Service (ToS) byte of the IPv4 header (or the Traffic Class byte in IPv6) for flow identification for OT traffic. [...]"
nit: I suggest "the use of the 6-bit value of the DSCP field of the Type of Service (IPv4) and Traffic Class (IPv6) bytes for flow identification", to avoid giving IPv4 preferred treatment.

Section 5.2.3:
"However if there is only one queue from the forwarding ASIC to the exception path, and for some reason the system is configured such that DetNet packets must be handled on this exception path, then saturating the exception path could result in delaying or dropping of DetNet packets."
nit: I suggest "such that some DetNet packets"; it is an issue if any do, and doesn't require all of them to take the exception path.

Section 6.1.1:
"A data-plane delay attack on a system controlling substantial moving devices, for example in industrial automation, can cause physical damage. For example, if the network promises a bounded latency of 2ms for a flow, yet the machine receives it with 5ms latency, control loop of the machine can become unstable."
nit: "the control loop".

Section 7.2:
"There are different levels of security available for integrity protection, ranging from the basic ability to detect if a header has been corrupted in transit (no malicious attack) to stopping a skilled and determined attacker capable of both subtly modifying fields in the headers as well as updating an unsigned MAC. [...]"
I'd suggest s/unsigned MAC/unkeyed checksum/.

Section 9.2:
It's a bit surprising to not see references to the (security considerations of the) MPLS control word specs like RFCs 4385 and 5586.
Erik Kline No Objection
Martin Duke No Objection
Martin Vigoureux No Objection
Murray Kucherawy No Objection
Comment (2021-01-05 for -13)
I found this to be an interesting read. Once you mentioned aircraft internals, I was even more into it.

This text in the Abstract caught my eye: "This document also addresses security considerations specific to the IP and MPLS data plane technologies, thereby complementing the Security Considerations sections of those documents." It almost seems appropriate for this one to formally update those if indeed they were left incomplete. I realize, however, that's not possible for an Informational document if the others are Standards Track.

Besides that, some nits:

Section 8.1.8: s/coexistance/coexistence/

In Section 8.1.11, there's an instance of DETNET in all-caps, while it's "DetNet" everywhere else.

Section 8.1.22, a suggestion:

OLD: [...] A strategy used by DetNet for providing such extraordinarily high levels of reliability is to provide redundant paths that can be seamlessly switched between, all the while maintaining the required performance of that system.

NEW: [...] A strategy used by DetNet for providing such extraordinarily high levels of reliability is to provide redundant paths between which traffic can be seamlessly switched, all the while maintaining the required performance of that system.
Robert Wilton No Objection
Comment (2021-01-07 for -13)
Thanks for this document. Sorry, I've run out of time to review this in detail, although I don't immediately see any manageability concerns from scanning through the document. A few minor comments for your consideration:

1) Perhaps it might be helpful to remind readers in the introduction that DetNet isn't the same as TSN?

I don't know if these are already covered, or if they are not valid problems, but a couple of attacks that I would be concerned with are:

(2) Overloading the exception path queue on the router. E.g., if any of the DetNet streams require/expect some packets to be punted to the control plane or software data plane for processing (OAM related perhaps), and there is a single queue from the forwarding ASIC to a control plane or software data plane, then it could be possible for non-DetNet flows to overload that shared queue such that punted packets on the DetNet flows would end up being dropped.

(2b) Related to (2), if an attacker was able to overload the router or linecard CPU, e.g., through excessive management API requests, then it may be plausible that it could also cause control plane processing of packets to be dropped or slowed down.

(2c) If the control plane is being managed by a separate controller, then presumably both (2) and (2b) could equally apply to getting traffic to a controller, or processing events on the controller.

(3) Is there any potential issue with traffic being carried over L2 load-balanced links (e.g. LAG) that apply statistical QoS? E.g., by crafting traffic on a non-DetNet flow that overloads one LAG member but doesn't overload the statistical QoS guarantees. Perhaps this is outside the considerations for DetNet, or already covered by TSN.

I'll leave it to the authors to determine whether any of these are valid and require further text, or if they are either already sufficiently covered, out of scope, or not valid concerns.

Regards, Rob
Roman Danyliw (was Discuss) No Objection
Comment (2021-02-03 for -14)
Thank you to Yaron Sheffer for the SECDIR review. Please respond to it. Thanks for addressing my DISCUSS points and a number of my COMMENTs.

** Section 7.4. The use of [IEEE802.1Qch-2017] is a remarkably specific reference without any guidance on implementation, either here or in the active DetNet drafts (I checked). Please consider whether this is realistic guidance without further citation on how this could be implemented.
Éric Vyncke No Objection
Comment (2021-01-07 for -13)
Thank you for the work put into this document. Please find below some non-blocking COMMENT points (but replies would be appreciated) and some nits. Let's also try to address the COMMENT for Section 4. I hope that this helps to improve the document. Regards, -éric

== COMMENTS ==

-- Section 1 --
In "best practices for security at both the data plane and controller plane", is there a reason why the management/telemetry plane(s) is not included? Of course, most of the time this plane is isolated from the others, but anyway... Also, is it "controller plane" or "control plane"? Or is the 'controller plane' the plane connecting PCC to PCE (with an assumption that the ID is also relying on PCC/PCE)? Section 8.3 (OAM) is welcome, but why not already include OAM in the above sentence?

-- Section 5.2.3 & 6.3.1 --
May I assume that any layer-1 'jamming' (e.g., of a microwave link) is also covered by these sections? If so, then I suggest stating it.

-- Section 3.3 --
"(Note that PREOF is not defined for a DetNet IP data plane)." Will this note be applicable forever? Should the word 'currently' be used in this statement? I also do not see the point of using parentheses. I prefer the wording used in Section 7.1: "At the time of this writing, PREOF is not defined for the IP data plane."

-- Section 3.4 --
Probably due to my ignorance about DetNet, but I fail to understand why "having the network shut down a link if a packet arrives outside of its prescribed time window" and the rest of the section. Again, probably due to my lack of context, but you may want to explain the reasoning behind it.

-- Section 4 --
There is no 'TOS' field in the IPv6 header; it is replaced by 'Traffic Class'. So, please mention both of the fields.

-- Section 6 --
In Figure 2, there are mentions of blockchain and network slicing without any previous explanation (and I have a hard time seeing how blockchain traffic should be DetNet).

-- Section 8.3 --
This section seems to consider only OAM traffic added to the DetNet traffic, while there are a couple of in-band OAM techniques currently being specified at the IETF.

-- Section 9 --
If the IPsec sessions are established by a controller, then this controller could also send the Security Parameter Index (SPI), which is transmitted in the clear, and use this SPI in addition to the pair of IP addresses.

== NITS ==

-- Section 1 --
s/A DetNet is one that/A DetNet is a network that/

-- Section 8.2 --
s/Figure 5maps/Figure 5 maps/

-- Authors --
The URL http://www.mistiqtech.com does not work for me
(Deborah Brungard; former steering group member) Yes
(Alissa Cooper; former steering group member) No Objection
No Objection (2021-01-07 for -13)
I did not have time to review this document in detail, but I looked at the Gen-ART review and it seems that most of the points have been addressed, thanks. I agree with other ADs that the tables in Section 6 do not make much sense or add much value. At a minimum, the blockchain and network slicing columns should be removed, as they are provided with no explanation and do not seem to belong with the other columns.
(Barry Leiba; former steering group member) No Objection
No Objection (2021-01-04 for -13)
It's interesting to collect security considerations into one document. We have to be careful that in doing so, we don't fall into the trap of not thinking enough about security considerations specific to later documents, once this one is published and immutable. Let's please watch for that. I'm also interested to see the discussion of Magnus's DISCUSS points.

And just a few editorial comments about the Introduction:

"A DetNet is one that can carry data flows for real-time applications with extremely low data loss rates and bounded latency." I would spell it out first: "A Deterministic Network (DetNet) is one..."

"potentially bringing the OT network into contact with Information Technology (IT) traffic and security threats that lie outside of a tightly controlled and bounded area (such as the internals of an aircraft)." It's not clear from the sentence structure what "the internals of an aircraft" is meant to be an example of. Is it an example of a tightly controlled and bounded area (as it seems it would be)? Or is it outside that? And if it's not outside that, what's the point of using it as an example? Are you meaning to say that we have to deal with threats from outside that affect things inside the tight boundaries? Maybe it's best to try to reword this?

"following industry best practices for security at both the data plane and controller plane;" This should be "control plane", shouldn't it? Also in other places in the document.
(Magnus Westerlund; former steering group member) (was Discuss) No Objection
No Objection (2021-02-02 for -14)
Thanks for addressing my issues.
July 12, 2021
How Cloud Penetration Testing Defends Against Common Attacks
If your organization has recently migrated to the cloud or is in the process of migrating, you probably know that this major transition has the potential to introduce new vulnerabilities that can leave you exposed to cyberattacks.
Cloud penetration testing (pentesting) is a great way to proactively identify vulnerabilities in your cloud environment, enabling you to fix them before they can be exploited by attackers to compromise your valuable systems and data.
In this blog, we’ll review three of the most common vectors used for attacking cloud environments. We’ll also provide real-world examples of how Tevora’s cloud penetration testing methodology has been used to identify aspects of our clients’ cloud environments that were vulnerable to these types of attacks.
On-Prem vs. Cloud Vulnerabilities
Before diving into the attack vectors, it’s worth considering some of the ways in which cloud environments differ from on-prem environments and how these differences can affect vulnerability to attack.
Unlike on-prem environments—which often have hosts, servers, and sub-nets dedicated to specific applications—cloud environments use “serverless” cloud-native applications that can be spread across multiple external servers and Cloud Service Providers (CSPs). While they still run on servers, serverless applications are developed in a way that abstracts the server infrastructure away from the application layer. This allows developers to focus on application logic while CSPs handle the work of provisioning and scaling the server infrastructure.
The traditional approach of defending the on-prem corporate network perimeter doesn’t work well for cloud environments in which applications run on multiple external servers and CSPs. Organizations need to work in concert with their CSPs to develop new approaches (e.g., Zero Trust architectures) to harden their cloud environment defenses.
One of the significant benefits of cloud environments is that automated tools can be used to rapidly deploy code and allocate infrastructure resources across multiple servers and CSPs while minimizing or eliminating deployment errors. However, if attackers are able to obtain the credentials required to execute these powerful tools, they can wreak havoc by rapidly deploying malware across an organization’s cloud environment. In many cases, automated cloud deployment tool administrators use generic IDs and passwords and are given overly broad privileges, both of which can leave organizations vulnerable to attack.
Attack Vector 1 – Application
One potential pitfall when pentesting cloud applications is to focus on the application software alone without testing all of the supporting infrastructure tools and services that interact with the application. For example, you may have an application that performs a specific business function but relies on infrastructure tools such as S3 for storage buckets or Okta for identity management. In this case, it’s important to enumerate and test all business application functions as well as the supporting storage and identity management functions performed by S3 and Okta. Failing to perform this type of comprehensive testing can cause you to miss key vulnerabilities.
In one of our pentesting engagements, our client had a business application that used S3 storage buckets to let users upload files to a cloud server. By testing both the application and the supporting S3 tools, we determined that there was insufficient input checking on the cloud server side when performing S3 uploads. This weak input checking would have enabled an attacker with access to a typical employee’s credentials to arbitrarily overwrite uploaded files. For example, it would have been possible for an attacker to overwrite the static content of an uploaded HTML file, opening the door for significant attacks within our client’s cloud environment.
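For context, one standard way to enforce that kind of server-side input checking is to issue presigned POSTs whose policy pins the object key and bounds the upload size, so a client cannot name or overwrite arbitrary objects. The sketch below uses boto3's generate_presigned_post API; the bucket name and key scheme are hypothetical placeholders.

```python
import uuid
from pathlib import Path

import boto3

s3 = boto3.client("s3")

def presigned_upload(user_id: str, filename: str) -> dict:
    """Issue a presigned POST whose policy pins the exact object key."""
    safe_name = Path(filename).name         # strip any client-supplied path parts
    # The server chooses the key, so a client cannot target arbitrary objects.
    key = f"uploads/{user_id}/{uuid.uuid4()}/{safe_name}"
    return s3.generate_presigned_post(
        Bucket="example-upload-bucket",      # hypothetical bucket name
        Key=key,                             # exact key, no ${filename} wildcard
        Conditions=[
            ["content-length-range", 1, 10 * 1024 * 1024],  # 1 B to 10 MB
            {"key": key},                    # the policy rejects any other key
        ],
        ExpiresIn=300,                       # short-lived upload grant
    )
```

The returned fields and URL go to the client; S3 itself then rejects any upload that deviates from the signed policy, which is exactly the check that was missing in the engagement described above.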
Attack Vector 2 – Compromised Compute
This attack vector refers to an approach in which attackers gain access to a specific cloud compute instance and either compromise multiple applications within that instance, or pivot to gain access to other compute instances within the target organization’s cloud environment. This is often done by first gaining access to credentials on a specific compute instance, then leveraging that access to launch a broader attack. For example, in an AWS environment, an attacker might compromise a workload to gain access to temporary security credentials that allow that instance to interact with AWS.
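Those temporary credentials typically come from the instance metadata service, and any code that can issue HTTP requests from the workload (including via SSRF) can fetch them. The sketch below shows the standard IMDSv2 flow; it illustrates why requiring IMDSv2, limiting the token hop count, and tightly scoping instance roles matter.

```python
import requests

IMDS = "http://169.254.169.254/latest"

# IMDSv2: obtain a session token first. (IMDSv1 skips this step entirely,
# which is what makes it so attractive to SSRF-based attackers.)
token = requests.put(
    f"{IMDS}/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    timeout=2,
).text

headers = {"X-aws-ec2-metadata-token": token}
role = requests.get(f"{IMDS}/meta-data/iam/security-credentials/",
                    headers=headers, timeout=2).text
creds = requests.get(f"{IMDS}/meta-data/iam/security-credentials/{role}",
                     headers=headers, timeout=2).json()
# creds now holds AccessKeyId / SecretAccessKey / Token for the instance role --
# whatever that role can do in AWS, code running on the workload can do too.
print(creds["AccessKeyId"])
```

Keeping instance roles minimal limits the blast radius when a single workload is compromised in this way.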
In one engagement, we were able to compromise a client application that had remote code execution capabilities. This application leveraged agent-based configurations that were querying a global registry of information. With this access, we were able to use AWS configuration information in agent interactions to disclose other tenant configurations (including passwords). This allowed us to compromise other tenants within the client’s cloud environment.
Attack Vector 3 – Pivoting From On-Prem
Security weaknesses in on-prem environments are often exploited to compromise cloud environments. On-prem developer workstations present a target-rich environment for attackers because they frequently contain sensitive information that can be used to gain access to cloud environments. For example, developer workstations may have:
- Poorly protected Secure Shell Protocol (SSH) keys, which can be used to create proxy pivots that enable access to cloud environments (a simple audit sketch for this appears after this list).
- DevOps/automation infrastructure that, when compromised, can be used to deploy malware to cloud environments.
- Accounts/passwords that are the same as those used for a user’s cloud accounts. While this practice is strongly discouraged by most organizations, it happens.
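As a quick illustration of auditing the first item above, the sketch below walks a directory and flags private keys that appear to lack a passphrase. The ENCRYPTED-marker check is a first-pass heuristic (it covers PEM-format keys; OpenSSH-format keys need a deeper parse), so treat any hits as leads rather than verdicts.

```python
import os
from pathlib import Path

def find_unencrypted_ssh_keys(root: Path):
    """Walk a directory and flag private keys stored without a passphrase."""
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = Path(dirpath) / name
            try:
                head = path.read_text(errors="ignore")[:2000]
            except OSError:
                continue
            if "PRIVATE KEY" not in head:
                continue  # not a PEM/OpenSSH private key
            # Encrypted PEM keys carry an ENCRYPTED marker; unencrypted ones
            # do not. OpenSSH-format keys encode the cipher inside the base64
            # body, so they need a deeper check than this heuristic.
            if "ENCRYPTED" not in head:
                findings.append(path)
    return findings

for key in find_unencrypted_ssh_keys(Path.home() / ".ssh"):
    print(f"[!] possibly passphrase-less private key: {key}")
```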
In a recent pentesting engagement, we exploited the client’s legacy LLMNR protocol to gain remote access to their on-prem environment. With that access, we were able to upload and execute a payload that enabled us to enumerate the client’s Azure domains and targets and obtain access to on-prem domain admin and user credentials. Next, we discovered that some service and domain passwords were replicated in the client’s cloud environment, which enabled us to gain access that would have allowed an attacker to compromise a broad range of the client’s cloud applications.
Check Out Our Cloud Pentesting Webinar
For a much deeper dive into the ways cloud penetration testing can help your organization defend against common attack vectors, check out the recording of Tevora’s recent webinar on this topic.
We Can Help
If you have questions about cloud pentesting or would like help using this valuable testing approach to identify vulnerabilities in your cloud environment, Tevora’s team of security specialists can help. Just give us a call at (833) 292-1609 or email us at [email protected].
About the Author
Kevin Dick is the Manager of Threat Services at Tevora.
In the last couple of months, we worked on malware classification and malware clustering. The results are summarized in a technical report. In the article, we introduce a learning-based framework for automatic analysis of malware behavior. To apply this framework in practice, it suffices to collect a large number of malware samples and monitor their behavior using a sandbox environment. By embedding the observed behavior in a vector space, reflecting behavioral patterns in its dimensions, we are able to apply learning algorithms, such as clustering and classification, for analysis of malware behavior. Both techniques are important for an automated processing of malware samples and we show in several experiments that our techniques significantly improve previous work in this area. For example, the concept of prototypes allows for efficient clustering and classification, while also enabling a security researcher to focus manual analysis on prototypes instead of all malware samples. Moreover, we introduce a technique to perform behavior-based analysis in an incremental way that avoids run-time and memory overhead inherent to previous approaches.
Malicious software — so-called malware — poses a major threat to the security of computer systems. The amount and diversity of its variants render classic security defenses ineffective, such that millions of hosts in the Internet are infected with malware in the form of computer viruses, Internet worms, and Trojan horses. While obfuscation and polymorphism employed by malware largely impede detection at the file level, the dynamic analysis of malware binaries during run-time provides an instrument for characterizing and defending against the threat of malicious software.
In this article, we propose a framework for automatic analysis of malware behavior using machine learning. The framework allows for automatically identifying novel classes of malware with similar behavior (clustering) and assigning unknown malware to these discovered classes (classification). Based on both clustering and classification, we propose an incremental approach for behavior-based analysis, capable of processing the behavior of thousands of malware binaries on a daily basis. The incremental analysis significantly reduces the run-time overhead of current analysis methods, while providing an accurate discovery and discrimination of novel malware variants.
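To make the embedding step concrete, here is a toy sketch of the general approach: treat each behavior report as a sequence of monitored events, hash sliding-window n-grams into a sparse vector space, and cluster the normalized vectors. The reports and parameters below are invented for illustration; the actual framework uses its own behavior representation and prototype extraction.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.cluster import AgglomerativeClustering

# Toy behavior reports: one string of monitored API events per malware sample.
reports = [
    "create_file write_file create_key connect_tcp send_data",
    "create_file write_file create_key connect_tcp send_data recv_data",
    "enum_processes open_process write_memory create_thread",
    "enum_processes open_process write_memory create_thread close_handle",
]

# Embed each report as hashed event 2-grams, L2-normalized so that distances
# reflect behavioral overlap rather than report length.
vectorizer = HashingVectorizer(analyzer="word", ngram_range=(2, 2),
                               n_features=2**12, norm="l2")
X = vectorizer.transform(reports)

# Cluster the embedded behavior; samples with similar event sequences
# end up in the same discovered class.
clustering = AgglomerativeClustering(n_clusters=2, linkage="complete")
labels = clustering.fit_predict(X.toarray())
print(labels)
```

In the full framework, a representative prototype per cluster is what lets an analyst inspect a handful of samples instead of thousands.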
The full technical report is available at http://honeyblog.org/junkyard/paper/malheur-TR-2009.pd. It was joint work with Konrad Rieck, Philipp Trinius, and Carsten Willems. And the word cloud was generated using http://www.wordle.net/.
Understanding how data flows through PagerDuty can be tough at first. This visualization shows how basic PagerDuty concepts relate to each other.
- Events are sent to PagerDuty via an integration with an external system. This can be a monitoring tool, deployment tool, ticketing tool, etc.
- These events create incidents on PagerDuty services, which are representations of applications, components, or technical services in your environment.
- PagerDuty incidents are then assigned to whoever is on-call based on the escalation policy that is associated with the incident’s service. The escalation policy determines when and to whom incidents should be escalated if nobody responds within an escalation timeout period (e.g., 5 min).
- On-call schedules can be added directly to escalation policies to determine who should be notified based on the time of day and day of week.
- Responders (the people who are on-call in the escalation policy) are then notified about incidents based on the notification rules that are configured in their user profiles. Responders can choose to receive phone calls, text messages, email, or push notifications. A minimal example of how an external system sends an event into this flow appears below.
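For reference, the first step in the flow (an external system sending an event) looks roughly like this against the PagerDuty Events API v2; the routing key below is a placeholder for your service integration's key.

```python
import requests

def trigger_incident(routing_key: str, summary: str, source: str) -> str:
    """Send a trigger event; PagerDuty turns it into an incident on the service."""
    resp = requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": routing_key,      # the service integration's key
            "event_action": "trigger",
            "payload": {
                "summary": summary,          # what responders see in notifications
                "source": source,            # the affected host/component
                "severity": "critical",
            },
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["dedup_key"]          # reuse to acknowledge/resolve later

# trigger_incident("YOUR_INTEGRATION_KEY", "Disk full on db01", "db01.example.com")
```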
Jul 10, 2020
Those in charge of their company’s cybersecurity efforts may stay up at night wondering, “Who is clicking on the most links in emails? Who opens the most phishing emails? Who uses simple passwords?”
Amidst the best efforts to educate employees and build in best data practices throughout the organization, IT teams have no way of knowing who hasn’t followed directions until after the fact. These are examples of how hackers exploit the entry points employees use most — unstructured data. Although it can often be seen as an entry point for cybercrime, there is a way to harness the power of unstructured data (or “dark data,” as it’s often called) to mitigate these risks and turn these datasets into one of the most powerful resources for an organization. We spoke to George Kobakhidze, lead solutions engineer at ZL Technologies, to explain this further.
Read the rest of the article at DATAVERSITY.
One of the advantages of the FORZA framework is that it allows investigators the flexibility to consider non-technical issues. In a contextual manner, the FORZA model refines seizure by:

A. Exploring the accounts of information and rebuilding extracted user schema and system information in a forensically sound manner.
B. Exploring the geography of the business or victim site. Servers and machines could be located in one location or in various, geographically dispersed areas.
C. Exploring the network and domain infrastructure as a method for determining the extent to which this environment has been compromised.
D. Exploring the legal timeframe of the case, to include the monetary cost and resources required to continue the investigation.
The first schema to use parametrized parsers is the DNS schema. DNS is a high-volume source, and using optimized parsers enables the new normalized Threat Intelligence Analytics Rules (Domains, IPs) to match your TI to even the highest volume of DNS data. And with out-of-the-box optimized parsers for a wide variety of DNS servers and clients, including Windows DNS Server, InfoBlox, Cisco Umbrella, Corelight Zeek, Google Cloud DNS, and Sysmon, you get this detection across much more of your data.
Join us to learn more about parametrized parsers in our upcoming webinar, “Turbocharging ASIM: Making Sure Normalization Helps Performance Rather Than Impacting It,” on Oct 6th. Register, as usual, at https://aka.ms/securitywebinars.
Securing Distributed Computer Systems Using an Advanced Sophisticated Hybrid Honeypot Technology
Keywords: Honeypot, hybrid honeypot, virtual honeypots, malicious code, security of computer systems
Abstract: Computer system security is the fastest developing segment in information technology. The conventional approach to system security is mostly aimed at protecting the system, while current trends are focusing on more aggressive forms of protection against potential attackers and intruders. One form of protection is the application of advanced technology based on the principle of baits: honeypots. Honeypots are specialized devices aimed at slowing down or diverting the attention of attackers from the critical system resources to allow future examination of the methods and tools used by the attackers. Currently, most honeypots are configured and managed statically. This paper deals with the design of a sophisticated hybrid honeypot and its properties, with the aim of enhancing computer system security. The architecture of a sophisticated hybrid honeypot is represented by a single device capable of adapting to a constantly changing environment by using active and passive scanning techniques, which mitigate the disadvantages of low-interaction and high-interaction honeypots. The low-interaction honeypot serves as a proxy for multiple IP addresses and filters out traffic beyond concern, while the high-interaction honeypot provides an optimum level of interaction. The proposed architecture, employing a prototype hybrid honeypot featuring autonomous operation, represents a security mechanism that minimizes the disadvantages of intrusion detection systems and can be used to rapidly increase the security of a distributed computer system, both autonomously and in real time.
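To illustrate the low-interaction half of such a hybrid design, the minimal sketch below accepts TCP connections, serves a static decoy banner, and logs whatever the peer sends first. A real hybrid honeypot would additionally decide, per flow, whether to keep answering locally or hand the connection to a high-interaction backend; the banner and port here are illustrative only.

```python
import datetime
import socket

def low_interaction_listener(port: int = 2222) -> None:
    """Log connection attempts and first payloads; never execute anything."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (addr, src_port) = srv.accept()
        conn.settimeout(5)
        try:
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # static decoy banner
            data = conn.recv(1024)                     # capture the opener
        except socket.timeout:
            data = b""
        finally:
            conn.close()
        stamp = datetime.datetime.utcnow().isoformat()
        print(f"{stamp} {addr}:{src_port} sent {data!r}")
        # A hybrid honeypot would inspect `data` here and, for interesting
        # flows, proxy the session onward to a high-interaction backend.

# low_interaction_listener()  # runs forever; bind to an unused port
```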
How to Cite
Chovancová, E., Ádám, N., Baláž, A., Pietriková, E., Feciľak, P., Šimoňák, S., & Chovanec, M. (2017). Securing Distributed Computer Systems Using an Advanced Sophisticated Hybrid Honeypot Technology. COMPUTING AND INFORMATICS, 36(1), 113–139. Retrieved from https://www.cai.sk/ojs/index.php/cai/article/view/2017_1_113
Securing Removable Drives in Windows 7
With the proliferation of removable storage devices such as USB flash drives, organizations have become more and more concerned about the safety of their data.
What's to prevent a user from copying sensitive information from their work computers onto a flash drive and removing it from the premises in violation of policy? And if users are allowed to use flash drives, what happens if they lose them? Is there any way to safeguard the data stored on these drives when they fall into the wrong hands?
Microsoft Windows 7 provides a solution to both these problems. First, if your Active Directory Domain Services (AD DS) infrastructure is running on Windows Server 2008 or later, you can use Group Policy to prevent users from installing flash drives and other USB removable storage devices on their computers. And if your client computers are running Windows 7, you can use BitLocker To Go to encrypt any data stored on such devices.
Here are tips on how the new operating system can be set to block installation and also on how to manage encryption if you allow drive use.
The normal experience in Windows 7 when a user plugs a flash drive into a computer is that a balloon notification appears above the system tray (Figure 1).
Figure 1: Typical installation of a USB flash drive
Administrators who want to block automatic installation of USB storage devices on computers can do so by enabling the Prevent Installation of Removable Devices policy that is found at: Computer Configuration\Policies\Administrative Templates\System\Device Installation\Device Installation Restrictions.
The prevent installation policy is available in AD DS domains running on Server 2008 or later and can be applied to client computers running Windows Vista or later (Figure 2).
Figure 2: Using Group Policy to prevent installation of USB removable storage devices
When the policy setting is applied to a computer running Windows 7 and the user plugs a flash drive into the computer, one of two things will happen. If the drive had already been installed on the computer before the policy was applied, the drive will still be recognized and the user will be able to use it. If, however, the flash drive had never been plugged into the computer, Windows will attempt to install the device and then will display a balloon notification indicating that installation was blocked by policy (Figure 3).
Figure 3: Windows cannot install the flash drive because Group Policy is preventing it.
Before you enable this policy to block users from using USB removable storage devices, you need to be aware of one thing. If you later decide to disable the policy setting to allow such devices, any devices previously blocked from use will not automatically be recognized on the computers. Instead, the Devices And Printers window will display the previously blocked devices as “unspecified mass storage devices.”
To get these devices to work properly, the user will need to right-click on the listed device and select Troubleshoot (Figure 4).
Figure 4: Troubleshooting a USB removable storage device that won't automatically install
Doing this runs the Devices and Printers troubleshooter, which after examining the device will prompt the user to install the appropriate driver (Figure 5).
Figure 5: Troubleshooting an unrecognized mass storage device
Once the driver has been installed, the device will be properly recognized in the Devices and Printers window (Figure 6).
Figure 6: The device has been properly recognized.
Because of this process, be sure to carefully plan before implementing this policy setting in your domain.
Encrypting Removable Devices
Windows 7 now provides an additional capability that can help organizations safeguard their data should they decide to allow use of flash drives and other USB removable storage devices. This new feature, BitLocker To Go, extends the BitLocker Drive Encryption first introduced in Windows Vista to include removable drives, rather than just fixed disks.
To see how this works, start by plugging a flash drive into your computer to make sure it is recognized and that drivers are installed. Then click the Start button, type “bitlocker” in the search box, and click Manage BitLocker from the search results. (This approach is faster than browsing Control Panel — really.) Now, the BitLocker Drive Encryption window opens (Figure 7).
Figure 7: Configuring BitLocker and BitLocker To Go
To encrypt the flash drive, click Turn On BitLocker. Once BitLocker initializes the drive, the user is prompted to select the method to be used for unlocking the encrypted drive, which can be either a password or a smartcard. The user is then prompted to save or print the recovery key for the drive, which is needed to recover data should the password be forgotten or the smartcard lost. The drive is then encrypted, which can take several minutes or longer depending on drive size.
When the encrypted flash drive is removed and then re-inserted into the computer, the user is prompted to supply the decryption password or smartcard (Figure 8).
Figure 8: A password must be supplied to decrypt the flash drive once it has been encrypted.
The encrypted flash drive also contains an application called BitLocker To Go Reader (bitlockertogo.exe) so that if you plug the drive into a computer running Windows Vista or even Windows XP, you can open encrypted files stored on the drive (Figure 9). If you copy the files to your computer, the local versions of these files will be decrypted so you can modify them. The files on the flash drive will remain encrypted, however.
Figure 9: Using BitLocker To Go Reader on a Windows XP computer
Administrators can also configure how BitLocker To Go works using Group Policy. The policy settings for doing so are found at: Computer Configuration\Policies\Administrative Templates\Windows Components\BitLocker Drive Encryption\Removable Data Drives.
For example, you can use the Choose How BitLocker-Protected Removable Drives Can Be Recovered feature to set several recovery policies:
- whether data recovery agents can be used;
- whether users are allowed or required to generate a 48-digit recovery password and/or a 256-bit recovery key;
- whether recovery information should be stored in AD DS;
- whether to back up either the recovery password and key package or just the password (Figure 10).
Figure 10: Using Group Policy to specify how removable drives protected using BitLocker can be recovered
The NSA last week released guidance for securing communication systems, specifically Unified Communications (UC) and Voice and Video over IP (VVoIP).
Unified Communications (UC) and Voice and Video over IP (VVoIP) call-processing systems provide enterprises with communications and collaboration tools, combining voice, video conferencing, and instant messaging in a single workplace environment. These platforms are widely used in government agencies and by organizations in the supply chains of several government offices; for this reason, the agency wants to support them in securing their infrastructure.
However, these tools enlarge the attack surface of the organizations that use them; threat actors could exploit vulnerabilities and misconfigurations to take over the network of a target infrastructure.
Attackers could target these systems to deliver malware, impersonate users, eavesdrop on conversations, conduct fraud, and more.
“However, the same IP infrastructure that enables UC/VVoIP systems also extends the attack surface into an enterprise’s network, introducing vulnerabilities and the potential for unauthorized access to communications. These vulnerabilities were harder to reach in earlier telephony systems, but now voice services and infrastructure are accessible to malicious actors who penetrate the IP network to eavesdrop on conversations, impersonate users, commit toll fraud, or perpetrate denial of service effects,” reads the guidance published by the NSA. “Compromises can lead to high-definition room audio and/or video being covertly collected and delivered using the IP infrastructure as a transport mechanism.”
The guide is separated into four parts, and for each part it provides mitigations and best practices to implement.
The guide urges security by design for these tools and detailed planning and deployment activities, and recommends continuous testing and maintenance.
The NSA recommends using VLANs to limit lateral movement between UC/VVoIP systems and the data network, and to place access controls on the type of traffic. The agency also recommends implementing layer 2 protections, implementing authentication mechanisms for all UC/VVoIP connections and implementing an effective patch management process.
The guide recommends the adoption of authentication and encryption for signaling and media traffic, the deployment of fraud detection solutions, the enforcement of physical security for the systems composing the platforms, and the use of solutions for detecting and prevent DoS attacks.
The agency also recommends testing the infrastructure every time a new device is to be added to the operational network.
“Using the mitigations and best practices explained here, organizations may embrace the benefits of UC/VVoIP while minimizing the risk of disclosing sensitive information or losing service.” concludes the guide.
The NSA has also released an information sheet that summarizes the guide and the recommendations it includes.
Software reverse engineering, the art of pulling programs apart to figure out how they work, is what makes it possible for sophisticated hackers to scour code for exploitable bugs. It's also what allows those same hackers' dangerous malware to be deconstructed and neutered. Now a new encryption trick could make both those tasks much, much harder.
At the SyScan conference next month in Singapore, security researcher Jacob Torrey plans to present a new scheme he calls Hardened Anti-Reverse Engineering System, or HARES. Torrey's method encrypts software code such that it's only decrypted by the computer's processor at the last possible moment before the code is executed. This prevents reverse engineering tools from reading the decrypted code as it's being run. The result is tough-to-crack protection from any hacker who would pirate the software, suss out security flaws that could compromise users, and even in some cases understand its basic functions.
"This makes an application completely opaque," says Torrey, who works as a researcher for the New York State-based security firm Assured Information Security. "It protects software algorithms from reverse engineering, and it prevents software from being mined for vulnerabilities that can be turned into exploits."
A company like Adobe or Autodesk might use HARES as a sophisticated new form of DRM to protect their pricey software from being illegally copied. On the other hand, it could also mean the start of a new era of well-armored criminal or espionage malware that resists any attempt to determine its purpose, figure out who wrote it, or develop protections against it. As notable hacker the Grugq wrote on Twitter when Torrey’s abstract was posted to SyScan’s schedule, HARES could mean the “end of easy malware analysis. :D”
To keep reverse engineering tools in the dark, HARES uses a hardware trick that's possible with Intel and AMD chips called a Translation Lookaside Buffer (or TLB) Split. That TLB Split segregates the portion of a computer's memory where a program stores its data from the portion where it stores its own code's instructions. HARES keeps everything in that "instructions" portion of memory encrypted such that it can only be decrypted with a key that resides in the computer's processor. (That means even sophisticated tricks like a "cold boot attack," which literally freezes the data in a computer's RAM, can't pull the key out of memory.) When a common reverse engineering tool like IDA Pro reads the computer's memory to find the program's instructions, that TLB split redirects the reverse engineering tool to the section of memory that's filled with encrypted, unreadable commands.
"You can specifically say that encrypted memory shall not be accessed from other regions that aren’t encrypted," says Don Andrew Bailey, a well-known security researcher for Lab Mouse Security, who has reviewed Torrey's work.
Many hackers begin their reverse engineering process with a technique called "fuzzing." Fuzzing means they enter random data into the program in the hopes of causing it to crash, then analyze those crashes to locate more serious exploitable vulnerabilities. But Torrey says that fuzzing a program encrypted with HARES would render those crashes completely unexplainable. "You could fuzz a program, but even if you got a crash, you wouldn’t know what was causing it," he says. "It would be like doing it blindfolded and drunk."
Torrey says he intends HARES to be used for protection against hacking---not for creating mysterious malware that can't be dissected. But he admits that if HARES works, it will be adopted for offensive hacking purposes, too. "Imagine trying to figure out what Stuxnet did if you couldn’t look at it," he says. "I think this will change how [nation-state] level malware can be reacted to."
HARES's protections aren't quite invincible. Any program that wants to use its crypto trick needs to somehow place a decryption key in a computer's CPU when the application is installed. In some cases, a super-sophisticated reverse engineer could intercept that key and use it to read the program's hidden commands. But snagging the key would require him or her to plan ahead, with software that's ready to look for it. And in some cases where software comes pre-installed on a computer, the key could be planted in the CPU ahead of time by an operating system maker like Apple or Microsoft to prevent its being compromised. "There are some concerns with this from a technical point of view," says Bailey. "But it's way better than anything we have out there now."
Microsoft to disable Excel 4.0 macros
Microsoft has revealed its plan to disable Excel 4.0 macros or XLM macros for all Microsoft 365 users in a recent email sent out to its customers.
First introduced back in 1992 with the release of Excel 4.0, XLM macros allow users of the company’s spreadsheet software to enter complex formulas inside Excel cells capable of executing commands both in the program itself and in a Windows computer’s local file system. Although XLM macros were replaced by VBA-based macros when Excel 5.0 was released, Microsoft has continued supporting this legacy feature over the years.
Although macros are convenient for Excel users, they have also been repeatedly abused by cybercriminals in their attacks. This is because, once enabled in a malicious document, they can give a threat actor additional control over a user’s system to install malware or carry out other attacks.
With more people working from home than ever before last year, there was a huge uptick in the number of malware strains and cybercriminals abusing XLM macros in their attacks. Things got so bad that Microsoft even went to the trouble of adding XLM macro support to Microsoft 365’s Antimalware Scan Interface (AMSI) in March of this year in an effort to help antivirus software deal with these kinds of attacks.
The company laid out its plan to disable the feature across three stages, according to The Record. The feature will be disabled by default for Microsoft 365 Insiders beginning at the end of this month; those on the Current Channel will see it disabled in early November, and the Monthly Enterprise Channel (MEC) will have XLM macros disabled by default in December.
These efforts may not be enough for security researchers, though, as they are now asking Microsoft to also disable VBA macros by default.
A visual warning system for the identification of proximity detection events around a continuous mining machine.
Jobes-CC; Carr-JL; Reyes-MA
Proceedings of the Human Factors and Ergonomics Society 57th Annual Meeting, September 30-October 4, 2013, San Diego, California. Santa Monica, CA: Human Factors and Ergonomics Society, 2013 Sep; 57:265-269
Underground mobile mining machines pose a difficult safety challenge since their operators generally work in close proximity to these machines in very restricted spaces. Intelligent software for use with electromagnetic proximity detection systems has been developed that can accurately locate workers around mining machinery in real time. If a worker is located too close to the machine, the machine's operation can be partially or completely disabled to protect workers from striking, pinning, and entanglement hazards. Researchers have developed a visual method of relaying to operators the interdiction of their machine operations by this intelligent proximity detection system. Several lighting sequence scenarios were tested with human subjects for effectiveness using a computer-based multimedia platform. Analysis of the test results indicates that a "fast flash" lighting arrangement is the most effective scenario based upon subject preference, rating, and accuracy of proximity intrusion location.
Miners; Personal-protection; Personal-protective-equipment; Humans; Men; Underground-mining; Underground-miners; Machine-operators; Electromechanical-systems; Workers; Work-areas; Engineering-controls; Robotics; Injury-prevention; Accident-prevention
In this chapter we looked at the life cycle of an error in PHP. We examined the behavior of the default error handler and the ways in which it can be configured to suit the needs of most programs. Twelve distinct error levels were covered, from informative to actionable to fatal. We also took a look at the concept of user-defined error handlers and intentionally triggered errors.
Through the use of the concepts covered in this chapter, you should be able to design applications that successfully ignore expected errors and safely handle unexpected ones. With careful planning and a solid implementation, your users should never have to see another cryptic system-generated error message again.
Amazon Web Services (AWS) gives you a lot of options for deploying applications and securing your resources. This course will give you experience using the best methods of deployment and how to use AWS security services to protect your account.
AWS gives developers a lot of options, but it can be overwhelming to know the best way to deploy applications or how to secure your resources. In this course, AWS Developer: Deployment and Security, you will gain the ability to effectively deploy applications to AWS and secure your AWS infrastructure. First, you will learn how to efficiently deploy resources and applications. Then, you will explore how to secure your resources in a VPC. Finally, you will discover how to use Users, Groups, and Roles to give permissions to your resources. When you’re finished with this course, you will have the skills and knowledge of AWS deployment and security needed to ensure your AWS resources are secure and maintainable.

Topics:
- Course Overview
- Deploying and Security on AWS
- Deploying Applications to AWS
- Coordinating Services in AWS
- Securing Infrastructure in AWS
- Managing Access to AWS
Recently, there has been much interest in using radiometric identification (also known as wireless fingerprinting) for the purposes of authentication. Previous work has shown that radiometric identification can discriminate among devices with a high degree of accuracy when simultaneously using multiple radiometric characteristics. Additionally, researchers have noted the potential for wireless fingerprinting to be used for more devious purposes, specifically privacy invasion or compromise. In fact, any radiometric characteristic that is useful for authentication is useful for privacy compromise. To date, there has been no proposal for how to mitigate such privacy loss for many of these radiometric characteristics, and specifically no proposal for how to do so in a low-cost manner. In this paper, we investigate some limits of an attacker's ability to compromise privacy, specifically an attacker that uses a transmitter's carrier frequency. We propose low-cost mechanisms for mitigating privacy loss for various radiometric characteristics. In our development and evaluation, we specifically consider a vehicular network (VANET) environment. We consider this environment in particular because VANETs have the potential to leak significant, long-term information that could be used to compromise drivers' personal information such as home address, work address, and the locations of any businesses the driver frequents. While tracking a vehicle using visually observable information (e.g., license plates) to obtain personal information is possible, such means require line-of-sight, whereas radiometric identification would not. Finally, we evaluate one of our proposed mechanisms via simulation. Specifically, we evaluate our carrier frequency switching mechanism, comparing it to the theory we develop, and we show the precision with which vehicles will need to switch their physical-layer identities given our parameterization for VANETs.
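To make the carrier-frequency idea concrete, the toy simulation below models each transmitter with a stable carrier frequency offset, lets an attacker match observations to the nearest known fingerprint, and then applies per-transmission random switching. All distributions and parameters are invented for illustration and are not taken from the paper.

```python
import random

random.seed(7)

N_DEVICES, OBS = 20, 200
# Each radio has a stable manufacturing carrier frequency offset (CFO).
true_cfo = [random.uniform(-20.0, 20.0) for _ in range(N_DEVICES)]

def observe(dev: int, randomize: bool) -> float:
    noise = random.gauss(0.0, 0.5)  # per-measurement noise
    # Countermeasure: each transmission adds a fresh random offset that
    # swamps the device-specific component.
    hop = random.uniform(-20.0, 20.0) if randomize else 0.0
    return true_cfo[dev] + noise + hop

def attack_accuracy(randomize: bool) -> float:
    hits = 0
    for _ in range(OBS):
        dev = random.randrange(N_DEVICES)
        measured = observe(dev, randomize)
        # Attacker matches the observation to the closest known fingerprint.
        guess = min(range(N_DEVICES), key=lambda d: abs(true_cfo[d] - measured))
        hits += (guess == dev)
    return hits / OBS

print(f"tracking accuracy without switching: {attack_accuracy(False):.0%}")
print(f"tracking accuracy with switching:    {attack_accuracy(True):.0%}")
```

Without switching, the attacker identifies transmitters far better than chance; with per-transmission switching, identification collapses toward random guessing, which is the intuition behind the low-cost mitigation.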
ARM’s highest performing processor, extending the capabilities of mobile and enterprise computing.
Java offers an efficient framework for developing and deploying enterprise and server- or client-side applications. However, because Java compiles to bytecode containing highly detailed metadata, compiled applications are easy to reverse engineer, tamper with, and pirate. Once Java applications are deployed, hackers and competitors have easy access to the source code and the embedded intellectual property (IP) within the applications themselves. For example, IP and personally identifiable information (PII) embedded in Java applications is susceptible to theft via reverse engineering. Furthermore, malware has traveled up the stack to the application layer. Hence, enterprises are seeing an increasing need to protect applications against many forms of tampering. Today’s threat environment requires resilient software protection solutions that reside at the application layer to protect against IP theft, malware invasion, and unauthorized access.
Types of Applications that Require Protection Include:
• Java Mobile and Desktop Applications - Distributed desktop applications written in Java are susceptible to static and dynamic analysis attacks. These applications suffer from the same inherent reverse-engineering issues as mobile and desktop applications written in other languages, except that bytecode is even easier to decompile. Supports BlackBerry and Android.
• Web Applications with Server-side Business Logic - Thin-client web applications where logic in the Web/Business/System tier is susceptible to theft, malware insertion, and unauthorized access to authentication credentials and keys.
Follow these steps to verify that Palo Alto Networks URL Filtering services categorize and enforce policy on URLs as expected. To test your URL Filtering and Advanced URL Filtering policy configurations, use Palo Alto Networks URL Filtering Test Pages. Test pages have been created for the safe testing of all predefined URL categories, including real-time-detection categories applicable only to firewalls running Advanced URL Filtering. You must enable SSL decryption for test pages to work over an HTTPS connection.
URL filtering test pages contain “real-time-detection” in the URL and confirm that firewalls correctly categorize and analyze malicious URLs in real time. They do not verify firewall behavior for all URL categories. You can check the classification of a specific website using Palo Alto Networks URL category lookup tool, Test A Site.
Follow the procedure corresponding to your URL Filtering subscription:
Verify URL Filtering
If you have the legacy URL Filtering subscription, follow the steps below to test and verify that the firewall correctly categorizes, enforces, and logs URLs in the categories you want to test.
1. Access a website in the URL category of interest. Consider testing sites in blocked URL categories. You can use a test page (urlfiltering.paloaltonetworks.com/test-<category>) to avoid directly accessing a site. For example, to test your block policy for malware, visit https://urlfiltering.paloaltonetworks.com/test-malware.
2. Verify that your firewall processes the site correctly. For example, if you configured a block page to display when someone accesses a site that violates your organization’s policy, check that one appears when you visit the test site.
3. Review the Traffic and URL Filtering logs to confirm that the URLs have been properly categorized and the correct policy rule is logged.
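If you want to exercise several test pages at once, a small script can automate the probes. The following Python sketch is a hypothetical helper, not a Palo Alto tool: the category list and the block-page marker are assumptions you should adjust to your own policy and response page.

```python
# Minimal sketch: probe URL Filtering test pages from a host behind the
# firewall and report whether each request appears to have been blocked.
import requests

TEST_CATEGORIES = ["malware", "phishing", "command-and-control"]  # assumed examples
BLOCK_PAGE_MARKER = "Web Page Blocked"  # assumed block-page text; match your config

for category in TEST_CATEGORIES:
    url = f"https://urlfiltering.paloaltonetworks.com/test-{category}"
    try:
        # verify=False only because SSL decryption re-signs certificates with
        # the firewall CA; in practice, pass the firewall CA bundle instead.
        resp = requests.get(url, timeout=10, verify=False)
        blocked = BLOCK_PAGE_MARKER in resp.text
        print(f"{category}: HTTP {resp.status_code}, blocked={blocked}")
    except requests.exceptions.RequestException as exc:
        # A connection reset is also a common sign of an enforced block.
        print(f"{category}: request failed ({exc}); possibly blocked at the firewall")
```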
Verify Advanced URL Filtering
If you have an Advanced URL Filtering subscription, follow the steps below to test and verify that real-time URL analysis works as expected. Palo Alto Networks recommends setting the real-time-detection action to alert for your active URL filtering profiles. This provides visibility into URLs analyzed in real time and will block (or allow, depending on your policy settings) based on the category settings configured for specific web threats. The firewall enforces the most severe of the actions configured for the detected URL categories of a given URL. For example, suppose example.com is categorized as real-time-detection, command-and-control, and shopping, categories with an alert, block, and allow action configured, respectively. The firewall will block the URL because block is the most severe action among the detected categories.
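The severity-resolution rule above is easy to express in code. This Python sketch illustrates the idea with an assumed severity ranking; it is not the exact PAN-OS implementation.

```python
# Sketch of the "most severe action wins" rule described above. The severity
# ordering below is an illustrative assumption, not the exact PAN-OS ordering.
SEVERITY = {"allow": 0, "alert": 1, "continue": 2, "override": 3, "block": 4}

def enforced_action(category_actions: dict[str, str]) -> str:
    """Return the most severe action among the detected categories."""
    return max(category_actions.values(), key=lambda action: SEVERITY[action])

# example.com from the text: real-time-detection=alert, C2=block, shopping=allow
print(enforced_action({
    "real-time-detection": "alert",
    "command-and-control": "block",
    "shopping": "allow",
}))  # -> "block"
```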
1. Verify that URLs are being analyzed and categorized using the Advanced URL Filtering service. Visit each of the test URLs to verify that the Advanced URL Filtering service is properly categorizing them.
2. Monitor the activity on the firewall to verify that the tested URLs have been properly categorized as real-time-detection. Use the log filter (url_category_list contains real-time-detection) to view logs that have been analyzed using Advanced URL Filtering. Additional web page category matches are also displayed and correspond to the categories as defined by PAN-DB.
3. Take a detailed look at the logs to verify that each type of web threat is correctly analyzed and categorized. In the example below, the URL is categorized as having been analyzed in real time and, additionally, as possessing qualities that define it as command and control. Because C&C has a more severe action than real-time-detection (block as opposed to alert), this URL has been categorized as command and control and has been blocked.
Australian Cyber Security Center Advisory 2020-008: TTPs
On 19 June 2020, Prime Minister Scott Morrison addressed the nation about malicious cyber activity against Australian networks. “We know it is a sophisticated state-based cyber actor because of the scale and nature of the targeting and the tradecraft used,” he said, “Our Government’s expert agency on Cybersecurity is the Australian Cyber Security Center and it’s already published a range of technical advisories.”
Specifically, in its Advisory 2020-008 the Australian Cyber Security Center published tactics, techniques and procedures (TTPs) used to target multiple Australian networks, focusing on the MITRE ATT&CK framework of known adversary TTPs. For years the Australian Cyber Security Center (ACSC) has used the MITRE Common Vulnerabilities and Exposures (CVE) framework to mitigate risk in operating systems.
What is MITRE ATT&CK?
MITRE ATT&CK® is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.
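If you want to work with ATT&CK programmatically, the knowledge base is published as STIX 2 JSON in MITRE's public cti repository on GitHub. A minimal Python sketch, assuming network access to that repository:

```python
# Minimal sketch: pull the public MITRE ATT&CK Enterprise STIX bundle and list
# technique IDs/names, so you can map your detections to the TTPs in an advisory.
import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

bundle = requests.get(ATTACK_URL, timeout=30).json()
for obj in bundle["objects"]:
    # ATT&CK techniques are modeled as STIX "attack-pattern" objects.
    if obj.get("type") == "attack-pattern" and not obj.get("revoked", False):
        ext_ids = [ref["external_id"] for ref in obj.get("external_references", [])
                   if ref.get("source_name") == "mitre-attack"]
        if ext_ids:
            print(ext_ids[0], obj.get("name"))  # e.g. "T1190 Exploit Public-Facing Application"
```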
Register below if you want to ensure you’re able to detect the ACSC TTPs associated with the PM’s announcement.
A Registry is the body responsible for assigning domain names and for managing domains and the related technical infrastructure, under a particular extension (.it, .eu, .com, etc.). The rules of the network are fixed by an international organization, ICANN (Internet Corporation for Assigned Names and Numbers), which is also responsible for appointing certain bodies to carry out the functions of a Registry (technically, a Registry is 'delegated' by ICANN) for managing the various extensions (.it, .fr, .com, etc.). In 1987, management of .it domains was delegated to the Italian National Research Council (CNR). This is how the .it Registry was founded; it has its offices at the Institute of Informatics and Telematics of the CNR in Pisa.
AWS Security Tools & Services
At the start of 2022, Amazon Web Services held a 33% market share, making it the most used enterprise cloud platform. However, with greater use also comes greater opportunity for risk: AWS customers have found themselves in the midst of many data breaches over the past years.
AWS enables enterprises to innovate and distribute data with unmatched effectiveness, but all that data — and the applications holding it — need sufficient protection.
This is why AWS works hard to secure its hardware and infrastructure, safeguarding things like customer information and supporting business continuity. However, AWS operates under a Shared Responsibility Model, meaning the customer is responsible for securing everything within their own cloud, including the many services and customizable configurations they use.
AWS provides a number of security tools and services to help make your life easier when it comes to securing your cloud. In this blog we’ll introduce some AWS security services, tools, and solutions that you can leverage as a customer.
What are AWS Security Tools and Services?
First, what are AWS security tools and services? They are a variety of services provided by AWS that span several realms of security, including data protection, identity and access management, infrastructure security, and threat detection and continuous monitoring.
Data protection. AWS recognizes the importance of securing data and making sure it is not lost in transfer. Their services help you meet core security, confidentiality, and compliance requirements. Features include things like encryption, data duplication, and data monitoring. An example of a data protection service provided by AWS is Amazon Macie.
Identity & Access Management. AWS recognizes the need for managing identities, so it provides an extensive list of tools and services to help you manage identity in the cloud. Overall, the goal is to control the resources and actions identities can use and manipulate.
Infrastructure Protection. Infrastructure protection is a critical component of information security and helps ensure that everything within your workload is safe from vulnerability exploitation or unintended access. While infrastructure is largely managed by AWS itself, they also provide some additional resources for managing the security of configurable infrastructure, e.g. AWS WAF.
Threat Detection. When in the cloud, you need constant reassurance that your security posture is strong and you have all the right configurations in place to optimize security. AWS provides services that increase visibility into your deployment and operations and also monitor identity behavior to help detect threats. An example is Amazon GuardDuty.
Account vs. Application vs. Service Security on AWS
One thing to note about AWS services and tools is that there are differences in what these resources are helping to protect. AWS differentiates between account security and application and service security.
Account: Securing an identity, be it a person or a non-person identity, requires a different approach to security. This is where cloud IAM practices shine, as AWS encourages controlling identities’ ability to access sensitive data or manipulate privileges. This can help prevent concerns like privilege escalation, excessive permissions, or poor hygiene around admin users. An example would be AWS IAM, a service providing security practices like SSO or MFA and assigning and managing the permissions of identities in your cloud. (A minimal account-security sketch follows this list.)
Application & Service: Applications and services within AWS are susceptible to threats like external attacks from bad actors or even vulnerabilities introduced during the development process, so they require their own breed of security resources. An example would be Amazon Inspector, a service intended for vulnerability management of applications deployed on EC2.
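As a concrete taste of the account-security side, the sketch below (assuming boto3 with configured credentials) flags IAM users that have no MFA device enrolled, one of the hygiene checks described above.

```python
# Minimal sketch: list IAM users without an enrolled MFA device.
import boto3

iam = boto3.client("iam")
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        mfa = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not mfa:
            print(f"{user['UserName']}: no MFA device enrolled")
```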
Now that we’ve reviewed the different purposes of AWS security tools and the different types, let’s apply that information and explore the top services and features customers can use today.
Top 14 AWS Security Tools
AWS Security Hub
Detection & Monitoring. AWS Security Hub is a cloud security posture management service that performs automated, continuous security best practice checks against your AWS resources. It aggregates your security alerts (findings) in a standardized format so that you can easily take action. Security Hub makes it simple to understand and improve your security posture with automated integrations to AWS partner products. Many roles may find themselves tasked with managing secure use of the cloud, but in particular this may be used by Cloud Security Analysts.
Amazon GuardDuty
Detection and Monitoring. Amazon GuardDuty protects stored data, AWS accounts, and workloads by monitoring DNS logs, event logs, and other data. Data is analyzed to detect anomalous behavior and present it in a centralized location. Security and SecurityOps teams would use this service.
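A quick way to see what GuardDuty is reporting is to pull finding IDs from the account's detector. A minimal boto3 sketch, assuming GuardDuty is already enabled in the region:

```python
# Minimal sketch: count GuardDuty findings per detector in the current region.
import boto3

gd = boto3.client("guardduty")
for detector_id in gd.list_detectors()["DetectorIds"]:
    finding_ids = gd.list_findings(DetectorId=detector_id)["FindingIds"]
    print(f"detector {detector_id}: {len(finding_ids)} findings")
```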
AWS Config
Detection and Monitoring. AWS Config will constantly evaluate your cloud configurations and detect changes that fall out of policy. This is extremely useful when making configuration changes to resources and ensuring opportunities don’t appear for data breaches. Security Analysts and Cloud Security teams would be the target audience.
Amazon Inspector
Detection and Monitoring. Amazon Inspector is an assessment service for apps deployed on EC2 instances. The security assessments include CIS benchmarks, possible exposures or vulnerabilities (CVEs), or just general security best practices like disabling root logins for SSH. This is useful for DevSecOps teams or Security Analysts.
AWS CloudTrail
Detection and Monitoring. CloudTrail monitors all behavior in your environment. This includes any action an identity takes and all API calls. This helps you review and detect any inappropriate or suspicious behavior. There is an additional AWS CloudTrail Insights you can add on to receive alerts when abnormal activity is detected.
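For example, you can review recent console sign-ins programmatically. A minimal boto3 sketch, assuming configured credentials:

```python
# Minimal sketch: pull the last day of console sign-in events from CloudTrail.
import boto3
from datetime import datetime, timedelta, timezone

ct = boto3.client("cloudtrail")
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "<unknown>"))
```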
Amazon CloudWatch
Detection and Monitoring. Similar to CloudTrail in its monitoring services, CloudWatch observes resources and application activity. It collects logs and event data to help detect any anomalous behavior, improve operationalization, or help with performance monitoring.
AWS Shield
Infrastructure Protection. AWS Shield protects all your applications running on AWS from DDoS (Distributed Denial-of-Service) attacks. This essentially protects the perimeter of your application. The audience for this service includes DevSecOps and cloud admins.
AWS Web Application Firewall
Infrastructure Protection. AWS WAF helps protect against web applications being exposed to the internet and therefore vulnerable to exploit. It will detect and mitigate attacks like SQL injections. It comes with default rules, but your team can also customize your own settings. Recommended for Cloud, Network or Security Admins.
AWS Identity Access Management (IAM)
Identity & Access Management. AWS IAM provides identity and access controls across the environment. Specifically, it offers granular control over what identities (person and non-person) can access and perform. Typical users of this may be IT Managers or Cloud Admins.
AWS IAM Analyzer
Identity & Access Management. Building off of the insights and controls AWS IAM provides, the complexities of managing the permissions of identities can get unruly. IAM Analyzer allows for a clearer picture of these access patterns to help remove excessive privileges and work towards least privilege.
Amazon Cognito
Identity and Access Management. Amazon Cognito helps manage customer access in web and mobile applications at scale. The offerings include identity federation, user sign-in features, and access controls. Some use cases include giving customers flexible sign-ins or role-based access to AWS resources.
Amazon Macie
Data Protection. Amazon Macie helps secure Amazon S3 buckets. It uses machine learning and pattern matching to detect sensitive data in S3 buckets. This alerts you to things like lack of encryption or publicly accessible data. This would be particularly useful to anyone responsible for compliance.
AWS Secrets Manager
Data Protection. Secrets Manager will help you better protect sensitive information or secrets that allow access to services and databases in your environment. If you need to access a secret, you can create an API call to retrieve the information from the Secrets Manager API. This tool would be useful to Development Teams or Admins.
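In practice, application code retrieves secrets at runtime instead of hard-coding them. A minimal boto3 sketch; the secret name used here is a hypothetical example:

```python
# Minimal sketch: fetch a secret at runtime from AWS Secrets Manager.
import boto3

sm = boto3.client("secretsmanager")
secret = sm.get_secret_value(SecretId="prod/db/password")  # hypothetical secret name
db_password = secret["SecretString"]  # never log or print real secret values
```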
AWS Artifact
Compliance. AWS Artifact is a go-to location for all compliance-related information. This includes receiving on-demand compliance reports from AWS and third parties, as well as managing, accepting, and reviewing agreements. An excellent use case would be assisting in the auditing process.
Enhance AWS Security Tools with Sonrai Security
Amazon Web Services has put out extensive services and tools to help your teams secure your cloud. That being said, AWS is a cloud provider, not a security provider. At this point in time it is widely accepted that leaning on third-party cloud security platforms is the best way to elevate your cloud security past the limitations of native tooling.
Sonrai Security starts at the core of your business, your most sensitive applications and data, and works outwards to secure it by managing cloud identities and their entitlements. The patented identity and permission analytics can compute the end-to-end permissions of every identity, even the ones you can’t see because they’re indirectly inherited via complex identity chains.
This insight, matched with AWS services makes an unbeatable team for reducing cloud risk and remediating threats.
Interested in seeing the Sonrai platform in action? Watch an on-demand demo, or request a personalized one for your needs.
AWS offers a wide range of security tools and services. For IAM solutions, look into Amazon Cognito, AWS IAM Analyzer, and AWS IAM; for data protection, look into Amazon Macie and AWS Secrets Manager; for detection and response, look into AWS CloudTrail and Amazon CloudWatch. For a complete list, see our blog.
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experience in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians, and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial, and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
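As a flavor of the proximity-based methods the book covers, here is a minimal NumPy sketch of a k-nearest-neighbor outlier score. This is a generic textbook technique, not code from the book.

```python
# Minimal sketch of a proximity-based outlier score: rank each point by its
# distance to its k-th nearest neighbor (larger distance = more outlying).
import numpy as np

def knn_outlier_scores(X: np.ndarray, k: int = 5) -> np.ndarray:
    # Pairwise Euclidean distances; O(n^2) memory, fine for small datasets.
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)  # ignore each point's zero self-distance
    return np.sort(dists, axis=1)[:, k - 1]  # distance to the k-th neighbor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[8.0, 8.0]]])  # one planted outlier
print(knn_outlier_scores(X).argmax())  # -> 100, the index of the planted outlier
```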
From Wiki of WFilter NG Firewall
1 IP-MAC Binding
This module enables you to bind an IP address to a MAC address; a minimal sketch of the enforcement idea follows the list below. Please notice:
- When "ip-mac binding" is enabled, WFilter NGF DHCP server will assign static ip addresses to clients.
- WFilter NGF does not act as a DHCP server when deployed as a network bridge.
- If you have another dhcp server, for "ip-mac binding" to work properly, please modify your DHCP server to assign listed static ip addresses to clients.
- If you want to apply binding to clients connected through a layer-3 switch, you need to enable "MAC Detector".
- For unlisted IPs, you can choose to:
- "Block All". No internet access for unlisted IP addresses.
- "Allow All". Allow internet access for unlisted IP addresses.
- "Block below IP". Local IP address belongs to the IP ranges will be blocked.
- For unlisted MAC addresses, you can set each lan subnet to assign IP address or not.
- "Disable". Do not assign IP to unlisted MAC address.
- "Enable". Assign IP to unlisted MAC address.
3 IP-MAC List
- Click the "state" icon to turn the binding on or off.
Please notice: even when a binding is in the "off" state, a static IP address will still be assigned by WFilter's DHCP server.
4 Import & Remove
- Scan and Import: scan the local IP and MAC list for importing.
- Import List: import a pre-defined IP and MAC list.
- Delete: delete the IP-MAC list.
Synopses & Reviews
Keep black-hat hackers at bay with the tips and techniques in this entertaining, eye-opening book! Developers will learn how to padlock their applications throughout the entire development process—from designing secure applications to writing robust code that can withstand repeated attacks to testing applications for security flaws. Easily digested chapters reveal proven principles, strategies, and coding techniques. The authors—two battle-scarred veterans who have solved some of the industry’s toughest security problems—provide sample code in several languages. This edition includes updated information about threat modeling, designing a security process, international issues, file-system issues, adding privacy to applications, and performing security code reviews. It also includes enhanced coverage of buffer overruns, Microsoft .NET security, and Microsoft ActiveX development, plus practical checklists for developers, testers, and program managers.
Covers topics such as the importance of secure systems, threat modeling, canonical representation issues, solving database input, denial-of-service attacks, and security code reviews and checklists.
Includes bibliographical references (p. 741-745) and index.
About the Author
Michael Howard, CISSP, is a leading security expert. He is a senior security program manager at Microsoft® and the coauthor of The Software Security Development Lifecycle. Michael has worked on Windows security since 1992 and now focuses on secure design, programming, and testing techniques. He is the consulting editor for the Secure Software Development Series of books by Microsoft Press.
David LeBlanc, Ph.D., is a founding member of the Trustworthy Computing Initiative at Microsoft®. He has been developing solutions for computing security issues since 1992 and has created award-winning tools for assessing network security and uncovering security vulnerabilities. David is a senior developer in the Microsoft Office Trustworthy Computing group.
Table of Contents
Copyright; Dedication; Introduction; Who Should Read This Book; Organization of This Book; Installing and Using the Sample Files; System Requirements; Support Information; Acknowledgments; Part I: Contemporary Security; Chapter 1: The Need for Secure Systems; 1.1 Applications on the Wild Wild Web; 1.2 The Need for Trustworthy Computing; 1.3 Getting Everyone's Head in the Game; 1.4 Some Ideas for Instilling a Security Culture; 1.5 The Attacker's Advantage and the Defender's Dilemma; 1.6 Summary; Chapter 2: The Proactive Security Development Process; 2.1 Process Improvements; 2.2 The Role of Education; 2.3 Design Phase; 2.4 Development Phase; 2.5 Test Phase; 2.6 Shipping and Maintenance Phases; 2.7 Summary; Chapter 3: Security Principles to Live By; 3.1 SD3: Secure by Design, by Default, and in Deployment; 3.2 Security Principles; 3.3 Summary; Chapter 4: Threat Modeling; 4.1 Secure Design Through Threat Modeling; 4.2 Security Techniques; 4.3 Mitigating the Sample Payroll Application Threats; 4.4 A Cornucopia of Threats and Solutions; 4.5 Summary; Part II: Secure Coding Techniques; Chapter 5: Public Enemy #1: The Buffer Overrun; 5.1 Stack Overruns; 5.2 Heap Overruns; 5.3 Array Indexing Errors; 5.4 Format String Bugs; 5.5 Unicode and ANSI Buffer Size Mismatches; 5.6 Preventing Buffer Overruns; 5.7 The Visual C++ .NET /GS Option; 5.8 Summary; Chapter 6: Determining Appropriate Access Control; 6.1 Why ACLs Are Important; 6.2 What Makes Up an ACL?; 6.3 A Method of Choosing Good ACLs; 6.4 Creating ACLs; 6.5 Getting the ACE Order Right; 6.6 Be Wary of the Terminal Server and Remote Desktop SIDs; 6.7 NULL DACLs and Other Dangerous ACE Types; 6.8 Other Access Control Mechanisms; 6.9 Summary; Chapter 7: Running with Least Privilege; 7.1 Least Privilege in the Real World; 7.2 Brief Overview of Access Control; 7.3 Brief Overview of Privileges; 7.4 Brief Overview of Tokens; 7.5 How Tokens, Privileges, SIDs, ACLs, and Processes Relate; 7.6 Three Reasons Applications Require Elevated Privileges; 7.7 Solving the Elevated Privileges Issue; 7.8 A Process for Determining Appropriate Privilege; 7.9 Low-Privilege Service Accounts in Windows XP and Windows .NET Server 2003; 7.10 The Impersonate Privilege and Windows .NET Server 2003; 7.11 Debugging Least-Privilege Issues; 7.12 Summary; Chapter 8: Cryptographic Foibles; 8.1 Using Poor Random Numbers; 8.2 Using Passwords to Derive Cryptographic Keys; 8.3 Key Management Issues; 8.4 Key Exchange Issues; 8.5 Creating Your Own Cryptographic Functions; 8.6 Using the Same Stream-Cipher Encryption Key; 8.7 Bit-Flipping Attacks Against Stream Ciphers; 8.8 Reusing a Buffer for Plaintext and Ciphertext; 8.9 Using Crypto to Mitigate Threats; 8.10 Document Your Use of Cryptography; 8.11 Summary; Chapter 9: Protecting Secret Data; 9.1 Attacking Secret Data; 9.2 Sometimes You Don't Need to Store a Secret; 9.3 Getting the Secret from the User; 9.4 Protecting Secrets in Windows 2000 and Later; 9.5 Protecting Secrets in Windows NT 4; 9.6 Protecting Secrets in Windows 95, Windows 98, Windows Me, and Windows CE; 9.7 Not Opting for a Least Common Denominator Solution; 9.8 Managing Secrets in Memory; 9.9 Locking Memory to Prevent Paging Sensitive Data; 9.10 Protecting Secret Data in Managed Code; 9.11 Raising the Security Bar; 9.12 Trade-Offs When Protecting Secret Data; 9.13 Summary; Chapter 10: All Input Is Evil!; 10.1 The Issue; 10.2 Misplaced Trust; 10.3 A Strategy for Defending Against Input Attacks; 10.4 How to Check Validity; 10.5 Using Regular Expressions for Checking Input; 10.6 Regular Expressions and Unicode; 10.7 A Regular Expression Rosetta Stone; 10.8 A Best Practice That Does Not Use Regular Expressions; 10.9 Summary; Chapter 11: Canonical Representation Issues; 11.1 What Does Canonical Mean, and Why Is It a Problem?; 11.2 Canonical Filename Issues; 11.3 Canonical Web-Based Issues; 11.4 Visual Equivalence Attacks and the Homograph Attack; 11.5 Preventing Canonicalization Mistakes; 11.6 Web-Based Canonicalization Remedies; 11.7 A Final Thought: Non-File-Based Canonicalization Issues; 11.8 Summary; Chapter 12: Database Input Issues; 12.1 The Issue; 12.2 Pseudoremedy #1: Quoting the Input; 12.3 Pseudoremedy #2: Use Stored Procedures; 12.4 Remedy #1: Never Ever Connect as sysadmin; 12.5 Remedy #2: Building SQL Statements Securely; 12.6 An In-Depth Defense in Depth Example; 12.7 Summary; Chapter 13: Web-Specific Input Issues; 13.1 Cross-Site Scripting: When Output Turns Bad; 13.2 Other XSS-Related Attacks; 13.3 XSS Remedies; 13.4 Don't Look for Insecure Coooooonstructs; 13.5 But I Want Users to Post HTML to My Web Site!; 13.6 How to Review Code for XSS Bugs; 13.7 Other Web-Based Security Topics; 13.8 Summary; Chapter 14: Internationalization Issues; 14.1 The Golden I18N Security Rules; 14.2 Use Unicode in Your Application; 14.3 Prevent I18N Buffer Overruns; 14.4 Validate I18N; 14.5 Character Set Conversion Issues; 14.6 Use MultiByteToWideChar with MB_PRECOMPOSED and MB_ERR_INVALID_CHARS; 14.7 Use WideCharToMultiByte with WC_NO_BEST_FIT_CHARS; 14.8 Comparison and Sorting; 14.9 Unicode Character Properties; 14.10 Normalization; 14.11 Summary; Part III: Even More Secure Coding Techniques; Chapter 15: Socket Security; 15.1 Avoiding Server Hijacking; 15.2 TCP Window Attacks; 15.3 Choosing Server Interfaces; 15.4 Accepting Connections; 15.5 Writing Firewall-Friendly Applications; 15.6 Spoofing and Host-Based and Port-Based Trust; 15.7 IPv6 Is Coming!; 15.8 Summary; Chapter 16: Securing RPC, ActiveX Controls, and DCOM; 16.1 An RPC Primer; 16.2 Secure RPC Best Practices; 16.3 Secure DCOM Best Practices; 16.4 An ActiveX Primer; 16.5 Secure ActiveX Best Practices; 16.6 Summary; Chapter 17: Protecting Against Denial of Service Attacks; 17.1 Application Failure Attacks; 17.2 CPU Starvation Attacks; 17.3 Memory Starvation Attacks; 17.4 Resource Starvation Attacks; 17.5 Network Bandwidth Attacks; 17.6 Summary; Chapter 18: Writing Secure .NET Code; 18.1 Code Access Security: In Pictures; 18.2 FxCop: A "Must-Have" Tool; 18.3 Assemblies Should Be Strong-Named; 18.4 Specify Assembly Permission Requirements; 18.5 Overzealous Use of Assert; 18.6 Further Information Regarding Demand and Assert; 18.7 Keep the Assertion Window Small; 18.8 Demands and Link Demands; 18.9 Use SuppressUnmanagedCodeSecurityAttribute with Caution; 18.10 Remoting Demands; 18.11 Limit Who Uses Your Code; 18.12 No Sensitive Data in XML or Configuration Files; 18.13 Review Assemblies That Allow Partial Trust; 18.14 Check Managed Wrappers to Unmanaged Code for Correctness; 18.15 Issues with Delegates; 18.16 Issues with Serialization; 18.17 The Role of Isolated Storage; 18.18 Disable Tracing and Debugging Before Deploying ASP.NET Applications; 18.19 Do Not Issue Verbose Error Information Remotely; 18.20 Deserializing Data from Untrusted Sources; 18.21 Don't Tell the Attacker Too Much When You Fail; 18.22 Summary; Part IV: Special Topics; Chapter 19: Security Testing; 19.1 The Role of the Security Tester; 19.2 Security Testing Is Different; 19.3 Building Security Test Plans from a Threat Model; 19.4 Testing Clients with Rogue Servers; 19.5 Should a User See or Modify That Data?; 19.6 Testing with Security Templates; 19.7 When You Find a Bug, You're Not Done!; 19.8 Test Code Should Be of Great Quality; 19.9 Test the End-to-End Solution; 19.10 Determining Attack Surface; 19.11 Summary; Chapter 20: Performing a Security Code Review; 20.1 Dealing with Large Applications; 20.2 A Multiple-Pass Approach; 20.3 Low-Hanging Fruit; 20.4 Integer Overflows; 20.5 Checking Returns; 20.6 Perform an Extra Review of Pointer Code; 20.7 Never Trust the Data; 20.8 Summary; Chapter 21: Secure Software Installation; 21.1 Principle of Least Privilege; 21.2 Clean Up After Yourself!; 21.3 Using the Security Configuration Editor; 21.4 Low-Level Security APIs; 21.5 Summary; Chapter 22: Building Privacy into Your Application; 22.1 Malicious vs. Annoying Invasions of Privacy; 22.2 Major Privacy Legislation; 22.3 Privacy vs. Security; 22.4 Building a Privacy Infrastructure; 22.5 Designing Privacy-Aware Applications; 22.6 Summary; Chapter 23: General Good Practices; 23.1 Don't Tell the Attacker Anything; 23.2 Service Best Practices; 23.3 Don't Leak Information in Banner Strings; 23.4 Be Careful Changing Error Messages in Fixes; 23.5 Double-Check Your Error Paths; 23.6 Keep It Turned Off!; 23.7 Kernel-Mode Mistakes; 23.8 Add Security Comments to Code; 23.9 Leverage the Operating System; 23.10 Don't Rely on Users Making Good Decisions; 23.11 Calling CreateProcess Securely; 23.12 Don't Create Shared/Writable Segments; 23.13 Using Impersonation Functions Correctly; 23.14 Don't Write User Files to \Program Files; 23.15 Don't Write User Data to HKLM; 23.16 Don't Open Objects for FULL_CONTROL or ALL_ACCESS; 23.17 Object Creation Mistakes; 23.18 Care and Feeding of CreateFile; 23.19 Creating Temporary Files Securely; 23.20 Implications of Setup Programs and EFS; 23.21 File System Reparse Point Issues; 23.22 Client-Side Security Is an Oxymoron; 23.23 Samples Are Templates; 23.24 Dogfood Your Stuff!; 23.25 You Owe It to Your Users If…; 23.26 Determining Access Based on an Administrator SID; 23.27 Allow Long Passwords; 23.28 Be Careful with _alloca; 23.29 Don't Embed Corporate Names; 23.30 Move Strings to a Resource DLL; 23.31 Application Logging; 23.32 Migrate Dangerous C/C++ to Managed Code; Chapter 24: Writing Security Documentation and Error Messages; 24.1 Security Issues in Documentation; 24.2 Security Issues in Error Messages; 24.3 A Typical Security Message; 24.4 Information Disclosure Issues; 24.5 A Note When Reviewing Product Specifications; 24.6 Security Usability; 24.7 Summary; Part V: Appendixes; Appendix A: Dangerous APIs; APIs with Buffer Overrun Issues; APIs with Name-Squatting Issues; APIs with Trojaning Issues; Windows Styles and Control Types; Impersonation APIs; APIs with Denial of Service Issues; Networking API Issues; Miscellaneous APIs; Appendix B: Ridiculous Excuses We've Heard; No one will do that!; Why would anyone do that?; We've never been attacked.; We're secure: we use cryptography.; We're secure: we use ACLs.; We're secure: we use a firewall.; We've reviewed the code, and there are no security bugs.; We know it's the default, but the administrator can turn it off.; If we don't run as administrator, stuff breaks.; But we'll slip the schedule!; It's not exploitable!; But that's the way we've always done it.; If only we had better tools….; Appendix C: A Designer's Security Checklist; Appendix D: A Developer's Security Checklist; General; Web and Database-Specific; RPC; ActiveX, COM, and DCOM; Crypto and Secret Management; Managed Code; Appendix E: A Tester's Security Checklist; A Final Thought; Appendix F: Annotated Bibliography; About the Author.