Our Vision

To give customers the most compelling IT Support experience possible.

Our Mission

Our mission is simple: make technology an asset for your business not a problem.

Our Values

We strive to make technology integrate seamlessly with your business so your business can grow. As your technology partner, we grow when you grow, so we will work hand in hand with you to support that growth.

We develop relationships that make a positive difference in our customers' businesses.

We exhibit a strong will to win in the marketplace and in every aspect of our business.


DNS Logs Anomaly Hunting Checklist for SOC Analysts

 

 


Check for hosts with a high volume of uncommon record types (TXT, NULL, CNAME, etc.)

 

• Command-and-control channels may use specific DNS record types (such as TXT and CNAME) to deliver commands and malware.

 

• Explore top-level domains (TLDs) such as .xyz, .me, and .biz, as well as TLDs for geographical regions in which your organization does not regularly operate.

 

• The proliferation of TLDs has made it easier for attackers to continually add new domains to their infrastructure to evade threat intel lists, as well as register doppelganger domains for common websites.

 

• Inbound/outbound requests for TLDs of geographical regions outside your organization's points of presence should be considered suspicious and reviewed, especially regions synonymous with cybercrime and anonymization.

 

• Aggregate and filter DNS application logs on the response code NXDOMAIN (domain does not exist) to review hosts with a high volume of DNS resolution failures.

 

• There are many benign reasons for failed DNS queries; however, abnormal volume can be a strong indicator of threat activity. For example, malware using domain generation algorithms (DGAs) will cycle through multiple generated domains until a valid reply is received. Since most of the requested domains will not exist, this generates a high volume of NXDOMAIN responses. In addition, abnormal NXDOMAIN volume can highlight hosts requesting malicious domains that are no longer active.
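As a minimal sketch of this hunt, assuming the DNS logs have already been parsed into records with hypothetical `src_ip` and `rcode` fields (adjust names and the threshold to your log source):

```python
from collections import Counter

def top_nxdomain_hosts(dns_events, threshold=100):
    """Count NXDOMAIN responses per source host and return hosts at or
    above the threshold, sorted by volume (descending)."""
    counts = Counter(
        e["src_ip"] for e in dns_events if e.get("rcode") == "NXDOMAIN"
    )
    return [(host, n) for host, n in counts.most_common() if n >= threshold]

# Hypothetical parsed DNS log records:
events = (
    [{"src_ip": "10.0.0.5", "rcode": "NXDOMAIN"}] * 150
    + [{"src_ip": "10.0.0.9", "rcode": "NOERROR"}] * 40
)
print(top_nxdomain_hosts(events))  # [('10.0.0.5', 150)]
```

Hosts surfaced this way still need triage, since busy resolvers and misconfigured software also fail lookups in bulk.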

 

• Look for hosts with high DNS request volume for multiple subdomains of a single parent domain.

 

• A common method of communicating data is to include it in the query string itself in place of the subdomain (commonly Base64-encoded). Identifying requests for multiple suspicious subdomains of a specific domain can help highlight this method of communication.

 

• Identify suspicious requests by reviewing queries for domains that are abnormally long or that have high entropy.

 

• Hunting for abnormally long queries with high entropy can help identify encoded data hidden in query strings, as well as evidence of DGA domains.
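A simple length-plus-entropy score can serve as a first-pass filter; the 40-character and 3.5-bit thresholds below are illustrative assumptions, not tuned values:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy in bits per character; DGA-generated labels tend
    to score noticeably higher than human-chosen names."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious(domain, max_len=40, min_entropy=3.5):
    label = domain.split(".")[0]  # score the leftmost label only
    return len(domain) > max_len or shannon_entropy(label) >= min_entropy

print(is_suspicious("google.com"))                # False
print(is_suspicious("xkqjz8f2mwp9rltq3vd.com"))   # True
```

In practice you would whitelist known CDN/telemetry domains first, since some legitimate services also use high-entropy labels.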

 

• Review endpoint process names for unusually named processes, or processes that are not regularly seen generating DNS requests.

 

• Attackers can simply register new domains to evade detection by threat intel lists. Identifying newly registered domains could help to easily identify suspicious activity.

 

• DNS fluxing is a technique used by attackers to hide an actual phishing or malware domain behind constantly changing compromised hosts (IPs) acting as proxies. To accomplish this, the DNS Time to Live (TTL) is set very low (around 5 minutes) so that DNS changes propagate quickly across the internet. Because the records are constantly changing, it is hard to identify and take down the actual source. Hunting for domains with a TTL under 5-10 minutes, or for the same domain resolving to different IP addresses over time, are two ways to detect it.
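Both fast-flux signals can be combined in one pass over parsed DNS answers; the record fields and the 5-IP cutoff here are assumptions for illustration:

```python
from collections import defaultdict

def fast_flux_candidates(answers, max_ttl=300, min_ips=5):
    """Flag domains whose answers carry a short TTL and that resolve to
    many distinct IPs -- the two fast-flux signals described above."""
    ips = defaultdict(set)
    for a in answers:
        if a["ttl"] <= max_ttl:
            ips[a["domain"]].add(a["ip"])
    return {d for d, s in ips.items() if len(s) >= min_ips}

answers = [{"domain": "bad.example", "ttl": 120, "ip": f"203.0.113.{i}"}
           for i in range(8)]
answers += [{"domain": "ok.example", "ttl": 3600, "ip": "198.51.100.7"}]
print(fast_flux_candidates(answers))  # {'bad.example'}
```

Note that CDNs also use short TTLs and many IPs, so hits should be cross-checked against known-good infrastructure.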

 

• Inbound Transmission Control Protocol (TCP) traffic on port 53 is used for zone transfers, which should only be allowed between primary and secondary DNS servers. A zone transfer involving an external IP/domain should be treated as a high-severity alert.

 

• DNS servers should not query unusual destinations; such queries often indicate potentially malicious traffic.

Kerberoasting Attack and Detection

Kerberoasting is a common attack used by malicious actors once access is gained to an organization's internal network and a domain account is compromised. It allows an attacker to elevate their privileges by gaining access to passwords for service accounts on the domain.



 

 

Key Points

• Using Kerberoasting, an attacker extracts service account credential hashes from Active Directory for offline cracking by exploiting a combination of weak encryption and poor service account passwords.

  • Kerberoasting is effective because an attacker does not require domain administrator credentials to pull off this attack and can extract service account credential hashes without sending packets to the target.

 

Detecting Kerberoasting:

  • Event ID 4768 (Kerberos TGT Request): the Account Domain field is the domain FQDN when it should be the short DOMAIN name.
  • Event ID 4769 (Kerberos TGS Request) with the vulnerable RC4 encryption types ("0x17" and "0x18") and ticket option 0x40810000.
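A sketch of that 4769 filter over pre-parsed Windows Security events; the dictionary field names are assumptions standing in for whatever your SIEM exposes:

```python
def kerberoast_hits(events):
    """Filter Windows Security events for 4769 TGS requests that use the
    weak RC4 encryption types (0x17/0x18) with ticket option 0x40810000."""
    return [
        e for e in events
        if e.get("EventID") == 4769
        and e.get("TicketEncryptionType") in ("0x17", "0x18")
        and e.get("TicketOptions") == "0x40810000"
    ]

sample = [
    {"EventID": 4769, "TicketEncryptionType": "0x17",
     "TicketOptions": "0x40810000", "ServiceName": "svc_sql"},
    {"EventID": 4769, "TicketEncryptionType": "0x12",  # AES -- likely benign
     "TicketOptions": "0x40810000", "ServiceName": "host01$"},
]
print([e["ServiceName"] for e in kerberoast_hits(sample)])  # ['svc_sql']
```

Stacking the hits by requesting account helps, since a single account requesting RC4 tickets for many different services in a short window is the classic Kerberoasting pattern.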

 

Elements of a Kerberoasting Attack

 

Here is how a Kerberoasting attack works in practice:

 

  • To begin with, an attacker compromises the account of a domain user. The user need not have elevated or “administrator” privileges. The attacker authenticates to the domain.
 
  • When the malicious  user is authenticated, they receive a ticket granting ticket (TGT) from the Kerberos key distribution center (KDC) that is signed by its KRBTGT service account in Active Directory.
 
  • Next, the malicious actor requests a service ticket for the service they wish to compromise. The domain controller will retrieve the permissions out of the Active Directory database and create a TGS ticket, encrypting it with the service’s password. As a result, only the service and the domain controller are capable of decrypting the ticket since those are the only two entities who share the secret.
 
  • The domain controller provides the user with the service ticket that is then presented to the service, which will decrypt it and determine whether the user has been granted permission to access the service. At this point, an attacker may extract the ticket from system memory, and crack it offline.
 
  • For password cracking, tools such as Impacket, PowerSploit and Empire contain features that automate the process: requesting service tickets and returning crackable ticket hashes in formats suitable for submission to cracking tools such as John the Ripper and Hashcat, which will pry plaintext credentials from vulnerable hashes.
 
 

 

 

Finding Golden and Silver Tickets

 

Purpose: Identify suspicious TGT (Golden) and TGS (Silver) tickets by comparing the MaxTicketAge from the domain policy to the difference in the StartTime and EndTime of the cached authentication ticket.

Data Required: Remote access to collect suspicious tickets, OR

a scheduled task to write possible bad tickets to the application event log for log/SIEM review.

Collection Considerations: Consider running local scripts and collecting the application event log, rather than a scan, to reduce noise.

Analysis Techniques: Comparative time analysis of domain policy vs. cached tickets.

 

Identify suspicious TGT (Golden) and TGS (Silver) tickets  

 

  • Event ID 4624 (Account Logon): the Account Domain field is the domain FQDN when it should be DOMAIN.
  • Event ID 4672 (Admin Logon): the Account Domain field is blank when it should be DOMAIN.
  • Event ID 4768 (Kerberos TGT Request): the Account Domain field is the domain FQDN, or blank, when it should be DOMAIN.
  • The Account Name is a different account from the Security ID.
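The comparative time analysis described above can be sketched in Python; the timestamp format and the 10-hour default MaxTicketAge are assumptions to adjust to your domain policy:

```python
from datetime import datetime

def ticket_exceeds_policy(start, end, max_ticket_age_hours=10):
    """Golden/Silver tickets are often minted with lifetimes far beyond
    the domain's MaxTicketAge (10 h by default); compare the cached
    ticket's StartTime/EndTime span against that policy value."""
    fmt = "%m/%d/%Y %H:%M:%S"
    span = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return span.total_seconds() > max_ticket_age_hours * 3600

# Mimikatz-style golden tickets default to a 10-year lifetime:
print(ticket_exceeds_policy("1/1/2023 09:00:00", "12/29/2032 09:00:00"))  # True
print(ticket_exceeds_policy("1/1/2023 09:00:00", "1/1/2023 18:00:00"))    # False
```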

 

 

BloodHound

  • BloodHound is an Active Directory (AD) reconnaissance tool.
  • BloodHound outputs results as JSON files.
  • BloodHound can collect information about users, computers, groups, and GPOs.
  • BloodHound can archive collected data into a ZIP file.
  • Hunt for suspicious process execution via services.exe.
  • Hunt for suspicious process injection.

Hacking, ATT&CK Phases, Kill Chain, and Incident Response Phases

Several step-by-step models are used across the cyber security field; the most common are listed below.

HACKING Methodology (Steps)

Footprinting (whois, nslookup) »
Scanning (Nmap, fping) »
Enumeration (DumpACL, showmount, Legion, rpcinfo) »
Gaining Access (tcpdump) »
Escalating Privilege (John the Ripper, getadmin) »
Pilfering (rhosts, user data, config files, registry) »
Covering Tracks (zap, rootkits) »
Creating Backdoors (cron, at, startup folder, keyloggers, RDP) »
Denial of Service (synk4, Ping of Death)

 

MITRE ATT&CK:

Reconnaissance » Resource Development » Initial Access » Execution » Persistence » Privilege Escalation » Defense Evasion » Credential Access » Discovery » Lateral Movement » Collection » Command and Control » Exfiltration » Impact

 

CYBER KILL CHAIN:

Reconnaissance » Weaponization » Delivery » Exploitation » Installation » Command and Control » Actions on Objectives

 

NIST Cybersecurity Framework:

Identify » Protect » Detect » Respond » Recover

SANS Incident Response:

Preparation » Identification » Containment » Eradication » Recovery » Lessons Learned

Web Shells: Detection and Hardening Servers Against Web Shells


Web shells and the challenges of detecting them


Web shells can be built using any of several languages that are popular with web applications. Within each language, there are several means of executing arbitrary commands and there are multiple means for arbitrary attacker input. Attackers can also hide instructions in the user agent string or any of the parameters that get passed during a web server/client exchange.
 
When analyzing a script, it is important to leverage contextual clues. For example, a scheduled task called “Update Google” that downloads and runs code from a suspicious website should be inspected more closely.

With web shells, analyzing context can be a challenge because the context is not clear until the shell is used. In a typical shell's code, the most useful clues are “system” and “cat /etc/passwd”, but they do not appear until the attacker interacts with the web shell.

Another challenge in detecting web shells is uncovering intent. A harmless-seeming script can be malicious depending on intent. But when attackers can upload arbitrary input files in the web directory, then they can upload a full-featured web shell that allows arbitrary code execution—which some very simple web shells do.

These file-upload web shells are simple, lightweight, and easily overlooked because they cannot execute attacker commands on their own. Instead, they can only upload files, such as full-featured web shells, onto web servers. Because of their simplicity, they are difficult to detect and can be dismissed as benign, and so they are often used by attackers for persistence or for early stages of exploitation.

Finally, attackers are known to hide web shells in non-executable file formats, such as media files. Web servers configured to execute server-side code create additional challenges for detecting web shells, because on such a server a media file is scanned for server-side execution instructions. Attackers can hide web shell scripts within a photo and upload it to the web server. When this file is loaded and analyzed on a workstation, the photo is harmless. But when a web browser asks the server for this file, the malicious code executes server-side.

These challenges in detecting web shells contribute to their increasing popularity as an attack tool. Defenders must constantly monitor how these evasive threats are used in cyber attacks and continue to improve protections.


Web shell: Finding Web Shells

Purpose: Identify web shells (stand-alone or injected)

Data Required : Web server logs (apache, IIS, etc.)

Collection Considerations : Collect from all webservers, and ensure that parameters are collected.

POST data should be collected.

• For apache consider using mod_security or mod_dumpio

• For IIS use Failed Request Tracing / Custom Logging

Analysis Techniques:

Look for parameters passed to image files (e.g., /bad.png?zz=ls).
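A minimal sketch of that technique over raw access-log lines; the regex and file extensions are assumptions to extend for your environment:

```python
import re

# Image requests should not normally carry query parameters; a parameter
# on a .png/.jpg/.gif request (e.g. /bad.png?zz=ls) suggests a web shell
# hidden inside a media file.
IMG_WITH_PARAMS = re.compile(r"\.(?:png|jpe?g|gif|ico)\?\S+", re.IGNORECASE)

def suspicious_requests(log_lines):
    return [line for line in log_lines if IMG_WITH_PARAMS.search(line)]

logs = [
    "GET /images/logo.png HTTP/1.1",
    "GET /uploads/bad.png?zz=cat+/etc/passwd HTTP/1.1",
]
print(suspicious_requests(logs))
```

Remember that this only covers GET parameters; POST bodies require mod_security/mod_dumpio or IIS Failed Request Tracing as noted above.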

 

Web logs: things to notice

    • User-Agent is rare

    • User-Agent is new

    • Domain is rare

    • Domain is new

    • High frequency of http connections

    • URI is the same

    • URI varies but length is constant

    • Domain varies but length is constant

    • Missing referrer

    • Missing or identical referrer across multiple URIs on a single destination

 

 

Endpoint detection strategies:

• Look for creation of processes whose parent is the webserver (e.g., apache, w3wp.exe); these will come from functions like:

○ PHP functions like exec(), shell_exec(), etc.

○ ASP(.NET) functions like eval(), bind(), etc.

• Look for file additions or file changes (a change-management process and schedule make it easy to differentiate 'known good'), using something like inotify on Linux, or FileSystemWatcher in .NET, to monitor the webroot folder(s) recursively.

 

Other Notable things:

An IIS instance (w3wp.exe) running commands like ‘net’, ‘whoami’, ‘dir’, ‘cmd.exe’, or ‘query’, to name a few, is typically a strong early indicator of web shell activity.

 

Look for suspicious processes that the IIS worker process (w3wp.exe), Apache HTTP Server processes (httpd.exe, visualsvnserver.exe), etc. do not typically initiate (e.g., cmd.exe and powershell.exe).

 

Look for suspicious web shell execution; this can identify processes associated with remote execution and reconnaissance activity (examples: “arp”, “certutil”, “cmd”, “echo”, “ipconfig”, “gpresult”, “hostname”, “net”, “netstat”, “nltest”, “nslookup”, “ping”, “powershell”, “psexec”, “qwinsta”, “route”, “systeminfo”, “tasklist”, “wget”, “whoami”, “wmic”, etc.).
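The two detections above reduce to one parent/child check over process-creation events; the event field names and the two lookup sets are illustrative assumptions:

```python
WEB_SERVER_PROCS = {"w3wp.exe", "httpd.exe", "apache2", "nginx",
                    "visualsvnserver.exe"}
RECON_TOOLS = {"cmd.exe", "powershell.exe", "whoami.exe", "net.exe",
               "ipconfig.exe", "systeminfo.exe", "tasklist.exe",
               "nltest.exe", "wmic.exe"}

def webshell_suspects(proc_events):
    """Flag process-creation events where a web server process spawns a
    command shell or reconnaissance utility."""
    return [
        e for e in proc_events
        if e["parent"].lower() in WEB_SERVER_PROCS
        and e["process"].lower() in RECON_TOOLS
    ]

events = [
    {"parent": "w3wp.exe", "process": "cmd.exe", "cmdline": "cmd /c whoami"},
    {"parent": "explorer.exe", "process": "cmd.exe", "cmdline": "cmd"},
]
print(webshell_suspects(events))  # flags only the w3wp.exe -> cmd.exe event
```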

 

lolbas:
    - rundll32.exe
    - dllhost.exe

tools:
    - net.exe
    - powershell.exe
    - ipconfig.exe
    - CobaltStrike
    - BloodHound
    - nslookup.exe

execution:
    - technique: "T1055.012 - Process Injection: Process Hollowing"
      behavior: RUNDLL32 created ~20 instances of DLLHOST without command-line arguments.
      id: 1669ecb0-3a8a-4858-9efd-23e5c01ad643
      type: Process Created
      cmdLine:
        - C:\\Windows\\System32\\dllhost.exe
      process: C:\\Windows\\System32\\dllhost.exe
      parentProcess: C:\\Windows\\System32\\rundll32.exe

 

Attackers need to execute tools. Look at Windows Event IDs 4688/592. Stack the results and look for outliers, grouping by execution time and user.
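A sketch of that stacking approach over parsed 4688 events; the field names and the rarity cutoff are assumptions for illustration:

```python
from collections import Counter

def rare_processes(events_4688, max_count=2):
    """Stack 4688 process-creation events by (user, process) and surface
    rare combinations -- outliers are worth a closer look."""
    counts = Counter((e["user"], e["process"]) for e in events_4688)
    return [pair for pair, n in counts.items() if n <= max_count]

logs = (
    [{"user": "alice", "process": "outlook.exe"}] * 50
    + [{"user": "alice", "process": "procdump.exe"}]
)
print(rare_processes(logs))  # [('alice', 'procdump.exe')]
```

The same grouping works by hour-of-day to catch tools executed outside a user's normal working window.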

 

Hardening servers against web shells

A single web shell allowing attackers to remotely run commands on a server can have far-reaching consequences. With script-based malware, however, everything eventually funnels to a few natural chokepoints, such as cmd.exe, powershell.exe, and cscript.exe. As with most attack vectors, prevention is critical.

Organizations can harden systems against web shell attacks by taking these preventive steps:

  • Identify and remediate vulnerabilities or misconfigurations in web applications and web servers. Use Threat and Vulnerability Management to discover and fix these weaknesses. Deploy the latest security updates as soon as they become available.
 
  • Implement proper segmentation of your perimeter network, such that a compromised web server does not lead to the compromise of the enterprise network.
 
  • Enable antivirus protection on web servers. Turn on cloud-delivered protection to get the latest defenses against new and emerging threats. Users should only be able to upload files in directories that can be scanned by antivirus and configured to not allow server-side scripting or execution.
 
  • Audit and review logs from web servers frequently. Be aware of all systems you expose directly to the internet.
 
  • Utilize the Windows Defender Firewall, intrusion prevention devices, and your network firewall to prevent command-and-control server communication among endpoints whenever possible, limiting lateral movement, as well as other attack activities.
 
  • Check your perimeter firewall and proxy to restrict unnecessary access to services, including access to services through non-standard ports.
 
  • Practice good credential hygiene. Limit the use of accounts with local or domain admin level privileges.

Social Engineering Red Flags and Email Investigation

 

Social Engineering

Social engineering is a single individual or group of people attempting to gain access to your systems using the methods below. It relies on human interaction: people are tricked into handing over credentials. Humans are the weakest link, so attackers use deceptive techniques to break in.

 


 

Types of Social Engineering Attacks:

  • Phishing - malicious email that sends a link
  • Spear phishing - targets individuals or specific groups
  • Email spoofing - masquerading as someone else, to appear as someone you think you know
  • Baiting - enticing the victim to do something, e.g., leaving a USB drive lying around
  • Tailgating - gaining access by following an employee through a door/gate


Indicators or Red Flags to look for in an investigation:

 


 


Spear phishing email: in this email fraud, the perpetrator asks for confidential and sensitive information. This type of attack resembles email spoofing fraud, but here, in almost all cases, the sender appears to be someone trustworthy in an authoritative position in the organization.

 

Business email compromise is when criminals use email to abuse trust in business processes to scam organizations out of money or goods.

 

An email forensic investigator can use several header fields to trace an email; broadly, the areas of interest are:

Sender's SMTP server (outgoing mail server) »

Encrypted mail headers »

The typical To, From, Subject, and Date lines »

Mail transfer agent and email client information »

Various X-headers added by the different SMTP servers and email clients involved in the email sending process.
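Python's standard `email` module can pull those areas of interest out of a raw message; the message below is a fabricated example for illustration:

```python
from email import message_from_string

raw = """\
Received: from mail.example.net (mail.example.net [192.0.2.10])
Received: from sender-pc (unknown [198.51.100.7])
From: ceo@example.com
To: finance@example.com
Subject: Urgent wire transfer
X-Mailer: SuspiciousClient 1.0

Please process immediately.
"""

msg = message_from_string(raw)
# Received headers are prepended by each hop, so the last one in the
# list is the hop closest to the true origin.
hops = msg.get_all("Received")
x_headers = {k: v for k, v in msg.items() if k.lower().startswith("x-")}
print(hops[-1])   # the hop nearest the sender
print(x_headers)  # {'X-Mailer': 'SuspiciousClient 1.0'}
```

Keep in mind that everything below the first Received header your own infrastructure added can be forged by the sender.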

 

CI/CD Pipelines and Automation

Modern web applications are built using continuous integration and deployment processes.

This means that you run tests specific to whatever environment you are pushing to, whether that is DEV, STAGING, or PROD.



Control 3.1 - CI/CD Pipeline (Priority: 1, Difficulty: Medium)
Description: Implement a CI/CD pipeline.

Control 3.2 - Application Environments (Priority: 2, Difficulty: Medium)
Description: Create separate environments for dev, staging, and prod, and treat each as independent with its own data, testing, and requirements.

Control 3.3 - Application Data Separation (Priority: 3, Difficulty: Difficult)
Description: Make sure that dev and test environments are not using the same data as production. If the use of live data is required, then make sure that data is anonymized.

Control 3.4 - CI/CD Administration (Priority: 3, Difficulty: Medium)
Description: Create and enforce user or team roles so that only the appropriate people can change or disable tests and deployment requirements.

Control 3.5 - Credential Store (Priority: 1, Difficulty: Medium)
Description: Create a secure, encrypted place to store sensitive credentials like passwords, API keys, etc.

Control 3.6 - Centralized Software Composition Analysis (Priority: 1, Difficulty: Easy)
Description: Scan source code for vulnerable libraries and open source software from within a CD stage.

Control 3.7 - Centralized Static Code Analysis (Priority: 2, Difficulty: Easy)
Description: Scan source code for vulnerabilities in the source code itself from within a CD stage.

Control 3.8 - Centralized Sensitive Data Analysis (Priority: 2, Difficulty: Easy)
Description: Scan source code for secrets, credentials, API keys, and similar from within a CD stage.

Control 3.9 - Dynamic Application Security Testing (DAST) (Priority: 3)
Description: Scan the running application for vulnerabilities.

Azure Well Architected Security Review Checklist

Here is a compiled checklist for an Azure security review.


Priority: High Weight: 90

Item No 1: Classify your data at rest and use encryption
Item No 2: Implement Conditional Access Policies

Priority: High Weight: 70
Item No 3: Conduct periodic access reviews for the workload
Item No 4: Use only secure hash algorithms (SHA-2 family)
Item No 5: Discover and remediate common risks to improve Secure Score in Azure Security Center
Item No 6: Define a set of Azure Policies which enforce organizational standards and are aligned with the governance team
Item No 7: Use tools like Azure Disk Encryption, BitLocker or DM-Crypt to encrypt virtual disks
Item No 8: Deprecate legacy network security controls
Item No 9: Integrate network logs into a Security Information and Event Management (SIEM)
Item No 10: Data in transit should be encrypted at all points to ensure data integrity
Item No 11: Establish a designated group responsible for central network management
Item No 12: Build a security containment strategy
Item No 13: Evolve security beyond network controls
Item No 14: Periodically perform external and/or internal workload security audits
Item No 15: Establish lifecycle management policy for critical accounts
Item No 16: Standardize on modern authentication protocols

Priority: Medium Weight: 60
Item No 17: Configure web apps to reuse authentication tokens securely and handle them like other credentials
Item No 18: Ensure security team has Security Reader or equivalent to support all cloud resources in their purview
Item No 19: Synchronize on-premises directory with Azure AD
Item No 20: Implement identity-based storage access controls
Item No 21: Design virtual networks for growth
Item No 22: Use standard and recommended encryption algorithms
Item No 23: Assign permissions based on management or resource groups
Item No 24: Add planning, testing, and validation rigor to the use of the root management group

Priority: Medium Weight: 50

Item No 25: Use managed identity providers to authenticate to this workload
Item No 26: Enforce password-less or Multi-factor Authentication (MFA)
Item No 27: Continuously assess and monitor compliance
Item No 28: Use identity services instead of cryptographic keys when available
Item No 29: Establish a designated point of contact to receive Azure incident notifications from Microsoft
Item No 30: Establish process and tools to manage privileged access with just-in-time capabilities
Item No 31: Implement role-based access control for application infrastructure

Priority: Medium Weight: 40
Item No 32: Implement resource locks to protect critical infrastructure.


 

Mimikatz


What is Mimikatz?

If you’re into penetration testing and Windows red teaming, then you have probably heard of Mimikatz. In case you have heard of the tool but don’t know what it does, let’s see what Mimikatz is.

Written in C, Mimikatz is a very powerful post-exploitation tool, described by CrowdStrike’s CTO and co-founder as “the AK-47 of cyber attacks.”

Some even call Mimikatz a Swiss Army knife of Windows credentials. Benjamin Delpy, the developer of the tool, says he created it to play with Windows security. He maintains his own GitHub repository where he provides the source code for the tool and updates it on a regular basis.

What can be done using Mimikatz?

Although widely known for credential dumping, that is not the only thing it can do.

Mimikatz is also capable of assisting in lateral movement and privilege escalation. Attacks like Pass-the-Hash, Pass-the-Ticket, Over-Pass-the-Hash, and Kerberoasting can also be carried out with Mimikatz.

Mimikatz Attack Capabilities

Mimikatz has numerous modules that let attackers perform a variety of tasks on the target endpoint. Some of the more important attacks facilitated by the platform are:

  • Pass-the-Hash—obtains an NTLM hash used by Windows to deliver passwords. This allows attackers to reuse the password without having to crack the hash.

  • Pass-the-Ticket—Mimikatz was famously used to break the Kerberos protocol. It can obtain a Kerberos “ticket” for a user account and use it to login as that user on another computer.

  • Kerberos Golden Ticket—obtains the ticket for the hidden root account (KRBTGT) that encrypts all authentication tickets, granting domain admin access for any computer on the network.

  • Kerberos Silver Ticket—exploits Windows functionality that grants a user a ticket to access multiple services on the network (via the Ticket Granting Server or TGS). The Kerberos protocol may not check the TGS key, allowing attackers to reuse the key and impersonate the user on the network.

  • Pass the Key—obtains a unique key used by a user to authenticate to a domain controller. The attacker can reuse this key to impersonate the user.

Anatomy of a Mimikatz Attack:

Mimikatz abuses and exploits the single sign-on functionality of Windows authentication, which allows a user to authenticate only once in order to use various Windows services.

After a user logs into Windows, a set of credentials is generated and stored in memory by the Local Security Authority Subsystem Service (LSASS). When invoked, Mimikatz loads its dynamic link library (DLL) into the LSASS process, from where it can extract the credential hashes and dump them onto the attacking system, and may even recover cleartext passwords.






Malware Analysis, Tools and Techniques

 
                      
What is Malware Analysis?
Malware analysis is the process of analyzing samples of malware families such as trojans, viruses, rootkits, ransomware, and spyware in an isolated environment to understand the infection, its type, purpose, and functionality. By applying various methods based on the malware's behavior, analysts understand the attacker's motivation and apply appropriate mitigations, creating rules and signatures to protect users.

Malware analysis plays an essential role in avoiding and understanding cyber attacks. When incident response teams are brought into an incident involving malware, the team will typically gather and analyze one or more samples in order to better understand the attacker’s capabilities and to help guide their investigation.
 
Types of Malware:

Type             | What It Does                                              | Real-World Example
Ransomware       | disables the victim's access to data until ransom is paid | RYUK
Fileless Malware | makes changes to files that are native to the OS          | Astaroth
Spyware          | collects user activity data without their knowledge       | DarkHotel
Adware           | serves unwanted advertisements                            | Fireball
Trojans          | disguises itself as desirable code                        | Emotet
Worms            | spreads through a network by replicating itself           | Stuxnet
Rootkits         | gives hackers remote control of a victim's device         | Zacinlo
Keyloggers       | monitors users' keystrokes                                | Olympic Vision
Bots             | launches a broad flood of attacks                         | Echobot
Mobile Malware   | infects mobile devices                                    | Triada

 
How to Perform Malware Analysis
There are various types of analysis, and related malware analysis tools, that are mainly used to break down malware:
  • Static Malware Analysis
  • Dynamic Malware Analysis
  • Memory Forensics
  • Web Domain Analysis
  • Network Interaction Analysis, etc.
What is Static Malware Analysis?
This procedure includes extracting and examining different binary components and static behavioral indicators of an executable, for example API headers, referenced DLLs, and PE sections, without executing the sample.
Any deviation from the normal outcomes is recorded in the static analysis results, and a verdict is given accordingly. Static analysis is done without executing the malware, whereas dynamic analysis is carried out by executing the malware in a controlled environment.
 
1. Disassembly – translating the binary machine code back into assembly so the program’s logic can be inspected without source code.
 
2. File Fingerprinting – computing cryptographic hashes of files so samples can be identified and tracked.
 
3. Virus Scanning – scanning the sample with antivirus engines and reputation services to detect known threats, e.g., VirusTotal, Payload Security.
 
4. Analyzing Memory Artifacts – while analyzing memory artifacts such as a RAM dump, pagefile.sys, or hiberfil.sys, the examiner can begin identifying rogue processes.
 
5. Packer Detection – used to detect packers, cryptors, compilers, scramblers, joiners, and installers.
Static Malware Analysis Tools
Ghidra and IDA: IDA Pro had been the go-to SRE (Software Reverse Engineering) suite for many years until Ghidra’s release in 2019. Since then, Ghidra’s popularity has grown rapidly because it is a free, open-source tool that was developed, and is still maintained, by the NSA.

Websites: Hybrid Analysis, VirusTotal

Other tools: md5deep, PEiD, Exeinfo PE, RDG Packer Detector, de4dot, PEview, WinDbg, HxD
What is Dynamic Malware Analysis?
Dynamic analysis should always be an analyst’s first approach to discovering malware functionality. In dynamic analysis, you build a virtual machine to use as a safe place for detonating and observing the malware.

In addition, the malware can be analyzed in a sandbox while monitoring its processes and the network packets it generates.
Dynamic Analysis Tools:
Some common dynamic analysis tools are Wireshark, Netcat, Process Monitor (Procmon), Process Explorer, Regshot, ApateDNS, ProcDOT, Process Hacker, PeStudio, Fiddler, and Cuckoo Sandbox.
After you have gathered some data, it is time for analysis:

  • Upload the hash (or file) to a site such as VirusTotal, ANY.RUN, or Hybrid Analysis to get information.
  • If an IP or domain name is available, check databases of known adversaries.
  • Use packet capture and traffic analysis if an external connection by the malware is suspected.
  • Detonate the malicious file in a sandbox to identify indicators.
  • Use logs from the SIEM and EDR to identify other infected endpoints.
  • Take the identified endpoint off the network; do not power it off.
  • Use the data gathered to set up blocks for future attacks.
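For the first step, the sample's hashes can be computed locally and submitted for lookup instead of uploading the file itself; a minimal sketch using Python's standard library:

```python
import hashlib
import os
import tempfile

def sample_hashes(path):
    """Compute MD5 and SHA-256 of a suspected sample in chunks, so the
    hashes can be looked up on VirusTotal / Hybrid Analysis without
    uploading the file."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

# Demo with a throwaway file standing in for a real sample:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"malware sample bytes")
tmp.close()
md5_hex, sha256_hex = sample_hashes(tmp.name)
os.unlink(tmp.name)
print(sha256_hex)
```

Hashing in chunks keeps memory use flat even for multi-gigabyte memory dumps.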
