
CHAPTER 1
INTRODUCTION
RSA is a network-based firm and one of the pioneers in the security domain, a sector of Information Technology. Security plays a crucial role in every field to protect data that is private and confidential. Confidentiality must be maintained against fraudulent activities that try to break a company's security systems. Security covers not only protection from fraudulent activities but also from malware that attacks security software, makes it misbehave and leaks confidential data to the attacker. RSA works on four main product lines focused on identifying and preventing different kinds of security compromise.

The first product is RSA "NetWitness Endpoint (NWE)", an intelligently designed product that detects threats typically caused by malware and attacks. NWE rapidly detects and alerts the user to any potential threat on devices, whether in the cloud or across virtual enterprise platforms. It proactively identifies the likelihood of malware through behavior analysis, and it notifies the customer or analyst when a machine's risk score indicates high risk so that the customer can act accordingly.
Another major product of RSA is RSA Secure Identity and Authentication, SecurID for short. As the name highlights, the product revolves around identifying and authenticating its end users. It incorporates biometrics, including fingerprint and iris detection. The product provides two-factor authentication, called 2FA, which authenticates the user with a dynamically generated 8-digit PIN. The user also holds a secret key, and only after entering it is the PIN generated. Two kinds of token are available to generate the PIN: a hard token (a physical device that generates the token) and a soft token (a software token generator installed on a machine). These tokens are available for both computer and mobile environments. Once the user enters the secret key, the PIN is generated, and with that PIN one can access the private network. Tokens also have a session timeout, after which a new PIN is generated and the old one expires. It is called 2FA because the secret key the user enters is the first factor of authentication, and the generated PIN used to access the network is the second, hence the name.
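To make the mechanism concrete, the following is a minimal Java sketch of how a time-based 8-digit PIN generator of this general kind can work. It is only an illustration built on RFC 4226-style HMAC truncation; the key handling, the 60-second time step and the choice of HMAC-SHA256 are assumptions, not RSA SecurID's actual proprietary algorithm.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

// Illustrative time-based PIN generator (NOT RSA SecurID's algorithm):
// an HMAC over the current time step, truncated to 8 digits, so the PIN
// expires when the time step rolls over.
public class TokenSketch {
    public static String generatePin(byte[] secretKey, long epochSeconds) throws Exception {
        long timeStep = epochSeconds / 60;                    // assumed 60-second session window
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(timeStep).array());
        int offset = hash[hash.length - 1] & 0x0F;            // dynamic truncation, RFC 4226 style
        long binary = ByteBuffer.wrap(hash, offset, 4).getInt() & 0x7FFFFFFFL;
        return String.format("%08d", binary % 100_000_000L);  // 8-digit PIN
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "user-secret-key".getBytes();            // hypothetical secret key
        System.out.println(generatePin(key, System.currentTimeMillis() / 1000));
    }
}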

The third product is the RSA Archer Suite. This product responds to risks proactively using data-driven insights and a streamlined, fast time-to-value approach, and it supports and empowers organizations of all sizes to respond to risk. This is achieved through fixed-price deployment and implementation services that let customers stand up their respective environments. The product not only focuses on quick action against risk but also provides issue management, business impact analysis, a risk catalog and a third-party catalog. The business impact analysis determines the effect of proactive action taken to mitigate a risk. RSA Archer helps customers reduce risk by defining and enforcing accountability for risk and compliance issues. It enables collaboration on risk issues across business lines and organizational boundaries, drives efficiency by automating processes, and improves visibility by consolidating data and enabling risk analytics across the organization.

The fourth product is the RSA Fraud and Risk Intelligence Suite (FRI). It manages fraud and digital risk across multi-channel environments without impacting customers or transactions, helping organizations act at the speed of fraud. The product protects customers' assets and brand value from fraudulent acts, and it secures transactions across the web and on mobile devices. New and emerging threats are tracked by its fraud-management strategies. The product thus brings many benefits: significant reductions in online e-commerce and mobile fraud and the related fraud losses; mitigation of phishing, malware, rogue mobile applications and other cybercrime threats in real time; detection of cyber threats in web and mobile applications with advanced behavioral analytics; regular reports and updates offering deep intelligence about identity theft, emerging threats and fraud threats; and protection of 3D Secure e-commerce transactions without impacting users.

1.1 Introduction to ECAT
This product is equipped with multiple ways to detect malware, both known and unknown (that is, new) malware. The product is an evolved Security Information and Event Management (SIEM) system: a comprehensive threat detection and response solution that leverages endpoint security data sources to help customers stay on top of today's sophisticated cyber threats.

The NetWitness Endpoint product justifies the name Endpoint by delivering full visibility into all processes, executables, events and behavior on all endpoints, such as servers, desktops, laptops and virtual machines. Its rapid data collection feature gathers full endpoint inventories and profiles in minutes, with no discernible impact on end-user productivity, achieved through an extremely lightweight endpoint agent. The product also scales from hundreds to hundreds of thousands of endpoints. All data storage and most of the analysis occur on the NetWitness Endpoint database, which ensures data integrity and drastically reduces endpoint impact. The product's features extend to behavior-based detection with User and Event Behavior Analytics (UEBA) [1]: it baselines "normal" endpoint behavior, detects deviations, and scores and prioritizes incidents based on potential threat level using UEBA monitoring capabilities and an advanced machine-learning algorithm. The product is intelligent and automatic in its functioning: it collects data and automatically analyzes processes, executables and more on endpoints; records data about every critical action surrounding an unknown item; and communicates with the RSA NetWitness Endpoint server for advanced analysis and threat prioritization.

This significance is achieved in NetWitness Endpoint by a functionality called "blocking". The blocking functionality blocks the hash of an executable file, be it ".exe" (a Windows executable), ".rpm" (a Linux package) or ".pck" (a Mac executable). Once a file is blocked, the machine is considered compromised, which is indicated by a rise in the machine threat count; the count grows according to the behavior of the executable files on the different operating systems mentioned above. Blocking a file also ensures that the file cannot propagate further through the network and affect other machines connected to the same network. It is also possible to hide such a blocked file from the user by quarantining it and making its existence invisible on the machine. Processes, executables and events are categorized by their behavior to calculate the score of each module. The categories include process events with a source file, and target files with a valid signature when triggering process events or file events. The behavior-analysis algorithm inspects such properties of process events, file events or both to calculate the score and prevent the machine from being compromised. This not only helps in prevention but also acts as a precaution for agents that have not yet encountered the module, making them aware of it before it arrives.

Known malware is detected with the help of stored hash values. While this advantage means known malware is detected in a short span of time, the time taken to detect new malware also matters. The former is possible because of the product's integration with well-built third-party tools that define detection rules and contain the hashes of known malware. These tools also let developers define their own rules that detect malware when the rule conditions match. Alongside these tools there is an external component named RSA Live, which contains the hashes of new malware identified through behavioral analysis. These are the components of the NWE product that help find malware on a machine as soon as it arrives. Malware that behaves anomalously is identified by the software running on the endpoints, be they laptops or workstations. Events on the machine, whether creating, executing, reading or writing a process, are all reported to the server. The server evaluates the behavioral-analysis rules, and if a rule matches, the server blocks the hash value. Detecting malware alone is not the solution; protecting the machine from that malware is what adds value, and that is the significance of using the product.
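The hash-matching step described above can be pictured with a small Java sketch. The blocklist entry and file path below are hypothetical; a real deployment would source its hashes from the feeds mentioned above.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of hash-based detection: compute the SHA-256 of an
// executable and look it up in a set of known-bad hashes.
public class HashBlocklistSketch {
    // Hypothetical known-malware hash list
    static final Set<String> BLOCKED_HASHES = new HashSet<>(Arrays.asList(
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"));

    static boolean isBlocked(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return BLOCKED_HASHES.contains(hex.toString());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isBlocked(Paths.get("sample.exe"))); // hypothetical file
    }
}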

1.2 Introduction to JAZZ
The JAZZ product is an improvement on the NetWitness Endpoint (NWE) product, in the sense of being an improved version of NWE. The improvements focus on scalability and performance. Customers' growing demands call for the best performance possible; performance may be gauged through ease of access (how easily the customer gets access to the product) and time of access (how fast access is provided by the server, which also defines the efficiency of response time). It provides real-time visibility [2] into the user's network traffic, on premises, in the cloud and across virtual infrastructure. It allows users to detect emerging, targeted and unknown threats as they traverse the network, and it monitors the timing and movement of attackers across the network. It reconstructs entire network sessions to support forensic investigations. It parses network data at capture time into session-wise metadata and enriches it with threat intelligence and contextual information about the business to detect legitimate, high-risk threats. This parsing is dynamic, and creating session-wise metadata dramatically accelerates alerting and analysis.
The JAZZ product also uses UEBA for behavioral analysis, through what are called app rules. UEBA is software that analyzes user activity data from logs, network traffic and endpoints, and correlates this data with threat intelligence to identify activities, or behaviors, likely to indicate a malicious presence in the environment. It uses machine-learning technology to baseline "normal" behavior and get smarter over time, and it applies both static rules and statistical analysis to rapidly and accurately detect suspicious activity. Using such advanced technology and statistical models, UEBA is a force multiplier for security teams struggling to stay on top of today's advanced, targeted threats. It spots insider threats and external attackers exploiting compromised credentials before those activities lead to a data breach. UEBA in JAZZ leverages user, network and endpoint behavior profiling to identify abnormal user behaviors. It detects abuse and misuse of privileged accounts, brute-force attacks, account manipulation and other malicious activities, and it requires no customization, rule authoring, or ongoing care, tuning and adjustment.

The JAZZ product provides accurate threat detection at a scale of roughly fifty thousand to several hundred thousand machines or agents, rapidly identifying anomalies from even the slightest deviation in user and entity behavior to highlight potential threats.

It protects against both external and insider threats by focusing on compromised credentials and the abuse or misuse of privileged user accounts, regardless of data source. The product makes security operations more efficient by reducing threat detection, investigation, response and remediation times. It also alleviates alert fatigue by slashing the number of incidents to investigate from thousands to low dozens while yielding more accurate alerts, minimizing false positives and eliminating the "noise" stemming from traditional security monitoring systems. JAZZ leverages a unique combination of capabilities to delve deep into the inner workings of endpoints and expose anomalous behaviors. Its techniques include live memory analysis, direct physical disk inspection, network traffic analysis, suspicious user behavior detection, and endpoint state assessment. It uses an immensely powerful yet lightweight piece of software, the endpoint agent, to collect full endpoint inventories and profiles within minutes, with no discernible impact on end-user productivity. The agent completes data collection quickly, with all data storage and the bulk of analysis occurring on the RSA NWE JAZZ server to ensure data integrity and drastically reduce resource impact. JAZZ automatically initiates a quick, targeted scan when unknown files, processes and other items load on an endpoint, records data about every critical action (e.g., file or registry modifications, network connections) surrounding the unknown item, and communicates with the RSA NetWitness Endpoint server for further analysis.

An overall comparison between NWE and JAZZ shows better functionality in JAZZ because of its microservices architecture, unlike NWE, which relies on a monolithic architecture. Because JAZZ uses microservices, it supports different servers that work independently while coordinating towards a common goal, making communication between the servers and the agent faster, and it can manage three times as many agents communicating with the server compared with the NWE product.

CHAPTER 2
LITERATURE SURVEY
This chapter covers the related technologies that support building and continuously running the product. Each of these technologies is briefly described below.

2.1 Network Security:
Network security is the foundation of our product. The whole product is designed and developed to ensure that all aspects of network and system security are satisfied. Generally, network security can be defined as an organization's strategy for ensuring that its network traffic and assets are protected [3]. Various hardware and software products have come into the picture to satisfy an organization's security needs. Most definitions of network security describe it as an enforcement mechanism aimed at network traffic analysis to uphold the confidentiality, integrity and availability of information or systems. That is:
Confidentiality: Access to sensitive data should be given to authorized personnel only; it must be protected from unauthorized access.
Availability: Systems and information should be continuously accessible to authorized users; there should be no denial of service on the system's side.

Integrity: Information must be kept intact so that the original data is not lost, and whenever it needs to be modified, this is done only by an authorized person through specific measures.

Nowadays, with the increase in security demands, the concept of 'defense in depth' is also gaining popularity. It simply means providing security in layers: for example, deploying firewalls as the first layer of security, then an intrusion detection system, an intrusion prevention system, antivirus software and so on. Firewalls act as the foundation of this layered security. Within these layers, different security measures are assorted, such as access control, identification, authentication, malware detection, encryption, file-type filtering, URL filtering and content filtering. These help keep threats out at every security layer.
2.2 RESTful Web Services:
Web services generally serve the purpose of machine-to-machine interaction. They provide an object-oriented interface to database servers which can be utilized by other clients as well, such as mobile applications. Web services use technologies like HTTP for machine-to-machine (and human-to-machine) interaction.

REpresentational State Transfer, or REST, is a web service architecture for designing services that are lightweight, easy to maintain and scalable to varying web requirements. Services built on top of the REST architecture are called RESTful web services. REST uses HTTP as its underlying protocol. In other words, REST can be described as a way to access resources on servers. For example, a client asks the browser for some information, and the browser fetches it from the server hosting that information. The resource can be anything: a document, picture, music, video or any other piece of information. REST provides different ways to access those resources.

A RESTful web service implementation has some key aspects:
Resources: Resources are the foundational units of web services. For example, suppose information about all students is hosted on the server at the URL http://demo.studentdetail.com. If somebody wants to access the information about student 1, the REST URL exposed should be http://demo.studentdetail.com/student/1, through which the details of the student with roll number 1 can be accessed.

Request Header: The header carries additional details along with the URL, such as authorization details, a specific URI, and so on. Headers usually define the response type that is required.

Request Body: When a client wants to add a resource on the server, it is sent as part of the request body; usually the POST method is used to send resources to the server. The request body therefore carries the resource that the client wants to send to the server along with the request.

Response Body: Whenever a client requests a resource, the server sends it back to the client, mainly as the response body. The response body carries the response, for example in XML format: if the client requests http://demo.studentdetail.com/student/1, the server returns the student's details as XML in the response body.

Response Status Codes: These are numeric codes returned with the response to indicate its status. For example, 200 is sent when a correct response is returned from the server with no complications, 404 is sent when no value is found at the server side for the request, and 500 is sent when an internal error occurs on the server.

REST has different methods, explained below; a minimal controller sketch follows the list:
POST: Used when the client wants to send a resource to the server.

GET: Used when the client requests data from the server.

PUT: Used to update existing values on the server.

DELETE: Used to delete existing resources from the server.
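As a concrete illustration, here is a minimal Spring Boot controller (the document's own stack) exposing these four methods for the student example above. The class name and paths are hypothetical, and it assumes a standard Spring Boot application class to run in.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.*;

// Hypothetical controller mirroring the student example; each handler
// maps to one of the REST methods just described.
@RestController
@RequestMapping("/student")
public class StudentController {
    private final Map<Integer, String> students = new ConcurrentHashMap<>();

    @GetMapping("/{id}")                    // GET: fetch a resource
    public String get(@PathVariable int id) {
        return students.get(id);
    }

    @PostMapping("/{id}")                   // POST: create a resource
    public void create(@PathVariable int id, @RequestBody String name) {
        students.put(id, name);
    }

    @PutMapping("/{id}")                    // PUT: update an existing resource
    public void update(@PathVariable int id, @RequestBody String name) {
        students.put(id, name);
    }

    @DeleteMapping("/{id}")                 // DELETE: remove a resource
    public void delete(@PathVariable int id) {
        students.remove(id);
    }
}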

2.3 Docker:
Creating virtual machines on a system is easy, but they consume a lot of resources and take time; switching between virtual machines is difficult, and shipping components out of virtual machines is hard as well. Docker emerged as an alternative to the older virtual-machine approach. It performs operating-system-level virtualization while giving the user the experience of a normal operating system. Virtualization in Docker is called containerization, where each virtualized environment runs as a container.

In other words, Docker can be defined as a tool that packages and runs containers independent of the platform or operating system. A container packages an application together with its dependencies. Docker has several components, explained below.
Docker Image:
In Docker everything starts from an image; even a simple hello-world program is turned into an image. A Docker image is a combination of a filesystem and system parameters.

Command used to build a Docker image from a Dockerfile: "docker build -t <image-name> ."; a container is then started from the image with "docker run <image-name>".

Command used to list Docker images: "docker images".

Docker Container:
Containers are simply running instances of Docker images; an image in runnable form executes inside a container. Many containers, built from the same or different images, can run side by side, which is helpful when simulating a complete project.

Docker Hub:
Docker Hub is a centralized repository where people from different communities upload their Docker images. One can download images from different sources here and also upload images. A personal account must be created to upload or download images.

Command to pull Docker images from the hub: "docker pull <imagename>"
Docker Compose:
Docker Compose uses a configuration file to run multiple containers as one service.

Docker File:
A Dockerfile helps the user create Docker images of their own. It is a simple text file containing the instructions to build an image, as in the sketch below.
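As an illustration, a minimal Dockerfile might look as follows; the base image and jar name are hypothetical.

# Illustrative Dockerfile: packages a (hypothetical) service jar into an image.
# Build with "docker build -t my-service ." and run with "docker run my-service".
FROM openjdk:8-jre
COPY target/my-service.jar /app/my-service.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/my-service.jar"]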

2.4 MongoDB
MongoDB is a document-oriented NoSQL database. It is used mostly in organizations that have high volumes of data to store. A database consists of collections and indexes; each collection consists of documents, and the data in each document can have a different schema from the others.
Various data models are available in MongoDB, which help represent data in different schemas and thereby express complex data in a simple format. MongoDB is also highly scalable.

The example below indicates the typical storage structure of a MongoDB document:
{
  "_id": <object id>,
  "Name": "name",
  "Order": {
    "orderId": 122,
    "product": "productName",
    "quantity": 4
  }
}
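A brief Java sketch using the MongoDB Java driver (mongodb-driver-sync) shows how such a document can be stored and queried; the connection string, database and collection names are hypothetical.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import static com.mongodb.client.model.Filters.eq;

// Stores and fetches a document shaped like the example above.
public class MongoSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");
            orders.insertOne(new Document("Name", "sampleName")
                    .append("Order", new Document("orderId", 122)
                            .append("product", "productName")
                            .append("quantity", 4)));
            // Query a nested field with dot notation
            Document found = orders.find(eq("Order.orderId", 122)).first();
            System.out.println(found.toJson());
        }
    }
}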
CHAPTER 3
SYSTEM REQUIREMENTS SPECIFICATIONS
RSA NetWitness Suite is a powerful threat detection system that helps the user identify, locate, prioritize and be notified of all kinds of threats, both already known and unknown. The NetWitness Suite as a whole comprises Logs, Packets and Endpoint solutions. As I worked with the NetWitness team, the requirements specification and architecture explained here focus only on the NetWitness Endpoint component.

3.2 ARCHITECTURE
Figure 1.1 explains the high-level architecture diagram of the NetWitness Suite. It gives a brief idea about all the components used and how they interact with each other to produce the desired results.

[Figure 1.1 shows the NW-UI communicating through an NGINX server, over a message queuing protocol, with the Endpoint Server, Admin Server, Configuration Server and Security Server, all backed by MongoDB.]

Figure 1.1 Architecture diagram of JAZZ.

The architecture of JAZZ mainly comprises servers, a messaging protocol, a broker, a database, and the agent. The servers here are different microservices, which can either be used together to act as one single system or be used individually to serve a purpose. The microservices used in JAZZ are explained in detail below.
Admin Server:
This is one of the microservices of JAZZ. As the name indicates, the admin server behaves like an administrator. It maintains compatibility between the agent and the endpoint server, keeping the agent RPM compatible every time improvements are made to it, and it also maintains compatibility between the agent RPM and the nginx server. Accessing the different services requires admin authentication, which is provided and handled by the admin server. The endpoint server needs all the data from the agent in order to check for malware, and the agent sends that data to the endpoint server through nginx. The admin server checks the login credentials before any operation happens at the endpoint server, but first it checks whether the security server has passed the certificate check. If the security server has not authenticated the certificates, the admin server denies access even if the credentials are correct.

The admin server is also responsible for managing RBAC, i.e., role-based access control, for the NetWitness UI. RBAC defines default permissions for different roles such as Admin, Analyst, Malware Analyst, SOC Manager, Operator and so on. The permissions differ for every role, and it is the admin server's responsibility to ensure that each role accesses only the functionalities it is permitted to access; the other functionalities are greyed out. Any number of users can be created with different access permissions, based on their requirements and usage, but the administrator, who has access to all functionalities, can delete any user at any time.

The admin server looks after not only user logins on the browser side but also database logins. The Mongo databases are divided in two: one database holds the collections related to functionality, and the other holds the collections related to configuration. The admin server ensures both sets of credentials match before giving access to the databases. Thus the admin server, as one of the microservices, supports good performance in maintaining the role-based authentication policies and efficiently handles logged-in users with the different permissions assigned to their roles.
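A simplified Java sketch of the RBAC idea is shown below; the role and permission names are illustrative, not the product's actual definitions.

import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Each role carries a fixed permission set; a request is allowed only if
// the user's role holds the required permission.
public class RbacSketch {
    enum Permission { VIEW_HOSTS, BLOCK_FILE, MANAGE_USERS, CONFIGURE_SERVICES }

    static final Map<String, Set<Permission>> ROLE_PERMISSIONS = new HashMap<>();
    static {
        ROLE_PERMISSIONS.put("Admin", EnumSet.allOf(Permission.class));
        ROLE_PERMISSIONS.put("Analyst", EnumSet.of(Permission.VIEW_HOSTS, Permission.BLOCK_FILE));
        ROLE_PERMISSIONS.put("Operator", EnumSet.of(Permission.VIEW_HOSTS));
    }

    static boolean isAllowed(String role, Permission required) {
        Set<Permission> granted = ROLE_PERMISSIONS.get(role);
        return granted != null && granted.contains(required);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("Operator", Permission.BLOCK_FILE)); // prints false
    }
}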

Config Server:
Different servers have different functionalities, and every server needs to be configured accordingly to work as desired. The config server is another microservice in the NWE product; it handles configuration for the rest of the servers in order to support their functionalities. The configuration file is a static file that should not be modified, as it affects the security settings, network settings and other settings related to the servers. If anything is missing or modified in the config server, no server will work correctly, as primary authentication itself will fail at the security server and the admin server will block access.
The config server [14] is responsible for the configuration settings of the agent that communicates with the endpoint server to send information about the machine on which it is deployed. The agent's configuration file is created once the agent packager is downloaded from the server. It contains the IP address of the server the agent communicates with and the port numbers through which it sends requests and receives responses. It also carries the information for a feature called auto-uninstallation, whereby the agent uninstalls itself automatically at the date and time selected by the user while generating the agent packager; the configuration file stores that date and time. The config file further includes agent information such as the agent ID, service name, display name, description, driver display name, driver service name, driver description, and the type of certificate validation the agent will present to the security server. It also has an agent-mode flag that tells whether full monitoring is enabled, based on the Boolean value of the full-agent setting.
The agent has a beacon interval to make the server aware of its existence. The beacon also drives state changes, for example scan status moving from idle to starting scan, then to scanning, and back to idle; these states change after every beacon from the agent to the server. By default the beacon interval is six hundred seconds, but it can be changed by a tester for testing purposes. Setting the full-agent value to true essentially means the agent works in full monitoring mode: along with information about executables, it also sends network event data. With a full agent, process events, file events and network events are all expected to be communicated to the server. The sketch below gives a picture of such a configuration.
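The sketch below is a purely hypothetical illustration of the fields described above; it is not the product's actual file format or key names.

# Hypothetical agent configuration sketch (illustrative keys only)
agentId: <generated-agent-id>
serviceName: nwe-agent
displayName: NetWitness Endpoint Agent
server:
  address: 10.10.10.1            # endpoint server IP (illustrative)
  httpsPort: 443
certificateValidation: thumbprint
fullAgent: true                  # full monitoring: process, file and network events
beaconIntervalSeconds: 600       # default beacon interval
autoUninstallAt: 2019-12-31T00:00:00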

Nginx Server:
The nginx server is the microservice whose job is to let the other microservices communicate with each other through it. Nginx acts as a bridge between the agent and the servers. It receives requests from the agent and directs them to the endpoint server, which takes the appropriate actions based on the agent's information. When the admin server sends an authentication request to the security server, nginx receives it and forwards it to the security server for certificate validation; on successful validation, the security server sends its acceptance back to the admin server, again through nginx. The config server receives all the configuration details entered by the user during agent packager generation, sent by the endpoint server through nginx. Most importantly, the orchestration that makes the endpoint server appear as a service under the admin's services is done by communication between node 0 and node x, the NW-UI and the endpoint server respectively. This orchestration is necessary to expose the services [15] that the endpoint server provides, such as data retention, scheduled scans and the agent packager. The nginx server receives each request from the agent, has it authenticated through a certificate match by the security server, and then decides which server the request should be directed to in order to fulfil the agent's request. Once the respective server has served the request, nginx communicates the result back, ensuring the response reaches the same agent that made the request.
Hence all these microservices are brought together and made to communicate with each other through the nginx microservice, which gives the product a thin-client-based architecture. This increases the number of agents that can communicate with the endpoint server using different beacon intervals, with responses authorized by the security server and improved response times. Performance improves because more agents can communicate and the types of information the agents send have grown, reducing the chance of a machine being compromised by malware going undetected. An illustrative routing configuration is sketched below.
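As an illustration of this routing role, a reverse-proxy configuration in nginx typically looks like the following; the location paths, server names and ports here are hypothetical, not the product's actual settings.

# Illustrative nginx routing sketch: requests from agents and the UI arrive
# here and are proxied to the microservice that owns the request.
server {
    listen 443 ssl;

    location /endpoint/ { proxy_pass https://endpoint-server:7050/; }
    location /config/   { proxy_pass https://config-server:7060/; }
    location /security/ { proxy_pass https://security-server:7070/; }
    location /admin/    { proxy_pass https://admin-server:7080/; }
}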

Endpoint Server:
This server is the last and most important microservice of NetWitness Endpoint JAZZ. It is the core of the product, where host information is stored, meaning information about the hosts on which agents are deployed. The Endpoint Server (EPS) [15] has all the information about a machine: the activities performed on it; the processes running on it; the DLLs, EXEs and other modules running on it; and even the websites browsed on the machine on which the agent is installed. It also stores the machine's scan data, last scan time, and last-seen time, used to trace inactive agents.
The endpoint server offers numerous services related to agent generation, deployment, maintenance and deletion. The first service lets the user generate an agent packager through the NW-UI, specifying the server IP address it should communicate with and the HTTP port. The user can specify the type of certificate validation to be done on the server side, which by default is the certificate thumbprint. The endpoint server also gives the user a "force override" option to overwrite an older agent with a newer one, removing the overhead of uninstalling the old agent, restarting and installing the new one. The certificate password entered by the user is sent to the admin server, which validates it before generating the executables from the agent packager: an rpm to deploy the agent on Linux machines, an exe for Windows machines and a pck for Mac machines. The endpoint server also lets the user schedule a scan daily or weekly, based on convenience and threat exposure, and it provides a unique service called data retention.
Data retention keeps the information of the agents that are alive, that is, data related to the agents that constantly beacon the server. If an agent has not beaconed for a long time, say months, then according to the data retention policy, once the threshold number of days without a beacon is crossed, the entire data of that agent is erased from the database. If such an agent comes online after the threshold has passed, it appears as a "zombie" agent, whose presence goes unnoticed because there is no data for it in the database.

The user interface of JAZZ is written in Ember JavaScript. The Ember UI is the user's window for checking the hosts on which agents are installed, presenting host information in a user-friendly manner. Agents that are active and able to communicate with the endpoint server appear on the UI Hosts page, from which the user can analyze a particular host's information. Everything the agent communicates to the servers is in JSON format, passed through nginx to the endpoint server. The endpoint server is backed by MongoDB: all host information is stored in the Mongo database, and some of it is surfaced in the UI. As soon as the servers are up and running they are visible in the Mongo database; RoboMongo (Robo 3T) is helpful for tracking this.

Advanced Message Queuing Protocol (AMQP) and Hypertext Transfer Protocol Secure (HTTPS) are the protocols used to carry requests between the nginx server and the other servers (here, the endpoint server), as shown in the figure.

Score     Name      Color    Description
0         Clean     Green    No machine score.
1-7       Low       Yellow   One or more modules contribute to the machine score, indicating minor malware behavior.
8-127     Medium    Orange   One or more modules contribute at a higher level of malware behavior; good indicators of abnormal activity, but may lead to false positives.
128-1023  High      Red      One or more modules contribute at a still higher level of malware behavior; strong indicators of compromise.
1024      Critical  Black    One or more modules with high malware behavior encountered; needs immediate attention.

Table 1.1 Scores with color indicators and description.

CHAPTER 4
PROBLEM STATEMENT
In today's internet landscape, security is a major concern in every field. Attacks on a system or workstation can come in various ways: starting from basic cyber-security attacks like SQL injection, they can extend to hacking large banking transactions and cyber theft. In simple scenarios an attack can be mitigated easily with simple measures, but large installations with highly confidential data, for example data belonging to the Central Bureau of Investigation, esteemed research centers like NASA or ISRO, or the banking industry, need a high level of security measures to protect their data.

In such scenarios, identifying an attack after it has occurred may lead to huge data loss and leakage of confidential information. We need a mechanism to prevent such attacks, or to monitor the system continuously and identify the probability of their occurrence. Continuous behavior analysis is necessary to identify and monitor such attacks, but monitoring requires a tool or technique that gives us all the event details. The events must capture every action happening in a system and indicate when there is a serious possibility of attack.

4.1 Objective:
The motivation behind the NWE product is to provide a platform where a user or analyst can view, as events, all the actions that have taken place on any machine. Based on these events, the analyst can monitor all actions and predict the possibility of attack in the near future.

Another motivation is to automate the monitoring process: the product captures all events, raises a concern whenever a compromise is found and, with the help of a few algorithms, generates a risk score indicating how prone the system is to attack and data loss.

Various algorithms are designed to capture compromising files, processes, events and so on. Their output triggers Indicators of Compromise, and based on the number of Indicators of Compromise triggered, a risk score is generated for each machine.

4.2 Assumptions:
The following assumptions are made with respect to the product:
An agent is installed on every machine that needs monitoring.

All the services are up and running.

Both the Concentrator and the LogDecoder are enabled to capture data.

The list of Indicators of Compromise is updated.

For the product to run correctly and capture all the data, the above conditions must be satisfied. It is assumed that they all hold before discussing how the implementation and overall working proceed.

CHAPTER 5
ALGORITHMS
In this chapter, a few flowcharts and pseudocodes are presented to explain the overall working of the NetWitness Endpoint product. The overall flow is explained in two stages: the first explains the overall setup flow of the product, and the second covers calculating the risk score.

5.1 Flow chart - 1
[Flow chart 1 depicts the product setup flow described in the steps below.]
Step 1: Deploy the orchestration server in the virtual appliance. The orchestration server is generically a combination of various services, such as the admin server, security server, config server, endpoint server, nginx server, etc. When the orchestration server is deployed, all these servers are installed automatically and started.
Step 2: Install the UI package on some virtual appliance. The IP address assigned to it is the IP through which we can access the product and its features. It also needs nginx support and a message queue to queue user queries and fetch responses from the server.

Step 3: Install the agent on the host machine. The agent acts as the primary source of intelligence in our product; it performs the basic functionality of capturing all the actions happening on a machine. The Agent Packager is an executable used to configure and generate the agent installer that is deployed on the client hosts. The agent installer then installs and activates an agent when executed.
The Agent Packager is downloaded by entering the host name or IP address of the NetWitness Endpoint server as the server, with the HTTPS port defaulting to 443. Certificate validation then determines how the agent validates the NetWitness Endpoint (NWE) server certificate. There are several validation types, explained below:
None: Indicates that no validation is performed. It is used during diagnosis only.
Full Chain: Used by clients who have their own certificates to validate. Full chain validation is based on the root certificate, i.e., the root certificate is checked during validation and the trust chain is followed through verification. No revocation checks are performed, which allows the Console Server to work offline.
Thumbprint: Selected by default on the agent packager page. A certificate is generated during installation in the server store and is used when starting the Console Server. The server uses the certificate to identify itself and looks for the matching certificate from the agent. It performs direct validation of the agent thumbprint and is the most restrictive option.

Force Overwrite: Overwrites the installed Windows agent regardless of version. If this option is not selected, the same installer can be run multiple times on a system but installs the agent only once.

Full Agent: The agent packager allows the user to select or deselect the agent mode, which determines what data the agent collects and sends to the endpoint server once deployed. When full agent is selected while generating the packager, the deployed agent collects all events related to processes, files and the network.

Auto-Uninstallation: This option exists for analysts who want to analyze systems in bulk and cannot keep uninstalling all the agents. The user selects a date and time while creating the packager, and at that date and time the agent uninstalls itself.

The above steps must be followed to set up the product and keep it up and running. The whole process performs various authentication steps internally, such as checking whether the installed agent packager comes from a trusted source, checking whether the username and password used to log in are correct and handling failures, and, during role-based access control, deciding which functionality is accessible to whom and how the different access permissions are handled.
5.2 Flow chart - 2
[Flow chart 2 depicts the risk-score calculation flow described in the steps below.]
Step 1: Go to the Hosts page and select the host whose risk level needs to be checked; click on the host to explore more options. Only machines with agents installed are visible on the Hosts page of the NWE product.

Step 2: Choose a full scan or quick scan on the chosen machine and wait until the scan completes. This ensures the latest actions are captured from the host machine.
Step 3: Once the scan status changes to complete, go to the Investigate page and check all the events. For example, the file name, the checksum associated with it, the host IP address, the server IP address, the alias name of the host, and the Indicators of Compromise triggered are captured and displayed on the event analysis page.

Step 4: The Indicators of Compromise triggered by events follow a series of processes, explained in the following pseudocode (a Java sketch of the scoring step follows the list):
The agent captures the data and sends it to the endpoint server.

The endpoint server has predefined identifiers, called 'meta', to identify every harmful event.

Every Indicator of Compromise is a sequence of AND and OR conditions.

The meta are the key elements for the query.

When the conditions of a query are satisfied, the corresponding Indicator of Compromise is triggered.

Every Indicator of Compromise has a severity assigned to it, based on a range of values.

The values associated with every Indicator of Compromise triggered within a particular time span are added up to calculate the risk level of the machine.
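A compact Java sketch of this final aggregation step, with hypothetical severity values:

import java.util.Arrays;
import java.util.List;

// The severities of the Indicators of Compromise triggered in the time span
// are added up and capped at the critical level (1024, per Table 1.1).
public class RiskScoreSketch {
    static int riskScore(List<Integer> triggeredIocSeverities) {
        int sum = 0;
        for (int severity : triggeredIocSeverities) sum += severity;
        return Math.min(sum, 1024); // 1024 marks the critical level
    }

    public static void main(String[] args) {
        System.out.println(riskScore(Arrays.asList(8, 32, 128))); // prints 168
    }
}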

Step 5: The above risk-score calculation is repeated for every machine, and the resulting score decides where each machine stands in terms of the likelihood of being attacked. The calculation can be made periodic or manual, where an analyst looks into it himself.

Step 6: The event analysis not only helps in identifying the risk score but also helps any user or analyst analyze all the actions that have happened on the machine. Sometimes, from observing those actions alone, an analyst can infer the kinds of attacks that may happen. The analyst may follow a pattern-based approach, use a specific algorithm, or identify attacks by brute force; it depends entirely on the analyst's requirements and experience in identifying attacks. The product is designed flexibly enough to meet most requirements of an analyst or user.

To compute the risk score, all the Indicators of Compromise that have been identified and tested must be imported into the UI. Otherwise the whole concept of a risk score does not stand, as there is no query on which to base the calculation. So one must ensure that all Indicators of Compromise are kept updated to industry standards and requirements.

CHAPTER 6
IMPLEMENTATION
This section presents implementation details of the work carried out during the course of the internship, focusing on my own work rather than the complete product. During the internship I was part of the architecture team, the team that works ahead of every other team to design and prototype the implementation, analyze the pros and cons of a design, and approve it if it satisfies the product requirements.

The work mainly focused on developing the risk-score component of the product. Alongside it, identifying a permanent solution for extending MongoDB data storage was another task, as was Dockerizing all the components of the Endpoint product.

6.1 Development Environment Requirements:
Programming Language: Java
Framework Used: Spring Boot
API technique used: RESTful API
IDE: IntelliJ
Project management tool: Maven
Virtualization: Docker
Database: MongoDB

The development environment consisted of all the components mentioned above. Code is written in Java 8 with the Spring Boot framework. The APIs created are REST calls, which can be invoked from a browser or an external tool like Postman.
MongoDB is used to provide NoSQL storage, as the data generated by product queries is not static and cannot fit the older SQL tabular format. MongoDB is highly flexible in terms of storing data and extending storage when space becomes scarce.

Unit tests are written in JUnit, and integration tests are written in both JUnit and Groovy. JUnit is a testing framework for the Java development environment and test-driven development. Groovy is a high-level, dynamic language with optional static typing and static compilation. It integrates easily with the various components of a Java program; especially in scenarios where an API call links to the backend and the database, complete integration testing can be done with ease. A minimal unit-test sketch follows.
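For illustration, a JUnit 4 unit test for the risk-score sketch from Chapter 5 could look like this; the class and method names are hypothetical.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import org.junit.Test;

// Verifies that severities are summed and capped at the critical level.
public class RiskScoreSketchTest {
    @Test
    public void sumsSeveritiesAndCapsAtCritical() {
        assertEquals(168, RiskScoreSketch.riskScore(Arrays.asList(8, 32, 128)));
        assertEquals(1024, RiskScoreSketch.riskScore(Arrays.asList(1023, 1023)));
    }
}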

Risk Score Calculation:
The risk score indicates to the user or analyst what risk level the system stands at. It is calculated from various factors such as the number of Indicators of Compromise triggered, the level of each Indicator of Compromise, their severity, aging, and so on. With all these factors considered together, the system is ranked between 0 and 1023, 0 being the lowest and 1023 the highest regular risk level, with 1024 reserved for critical situations.

Indicators of Compromise are the basic computational units of risk-score calculation. They are sequences of queries which, when all their conditions are satisfied, get triggered and contribute to the risk score. A query consists of unique identifiers defined for each unique event that generically occurs on any machine; those identifiers are called 'meta'. Based on the meta and the conditions under which a particular compromise can occur, rules are written, called 'Application Rules' or 'App Rules'. Each application rule has a unique name, which is the Indicator of Compromise.

The first task is writing the application rules that trigger the Indicators of Compromise. Around 423 application rules were identified and tested thoroughly to verify whether they trigger the particular Indicator of Compromise. Once testing and verification were complete, all the Indicators of Compromise were imported into the product's UI and saved in the 'Admin' package. It must be verified that the LogDecoder is started as a service and that the Concentrator is enabled to capture data: the LogDecoder is the service that handles sending meta from the agent to the endpoint server, whereas the Concentrator indexes the data. If these two services are not enabled to capture data, meta sessions will not arrive and no Indicator of Compromise will be triggered.
The complete end-to-end flow of how the Indicators of Compromise get triggered is explained in the Algorithms chapter.

The risk score is shown in the UI both as a number and as a color code. A different color is assigned to each level between ranges of numbers, as explained below:
If the risk score is zero, there is no harm to the system, indicated by the color 'Green'.

If the risk score is between 1 and 7, very little malware behavior has been found; one or more Indicators of Compromise contribute to the risk score, and the color used to indicate this is 'Yellow'.

A risk score between 8 and 127 signifies a slightly higher level of malware behavior. Good Indicators of Compromise may have been triggered, but they might lead to false positives. This is displayed in 'Orange', specifying a medium level of malware activity.

If the Indicators of Compromise triggered are strong enough to indicate real harm, the risk score rises to between 128 and 1023, indicating a 'High' level of malware activity; the color used is Red.

There can be situations of very high malware activity that trigger all the Indicators of Compromise. Such a critical stage obviously needs the admin's attention to check and verify the activities happening on the system; the score rises to 1024 in this situation, and the color used is 'Black'.

Based on the above color codes, it becomes visually easy for an analyst or user to determine which machines need the most attention. The mapping is sketched below.
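The mapping from score to color, as listed in Table 1.1, reduces to a simple Java lookup:

// Maps a machine risk score to the level and color code of Table 1.1.
public class ScoreColorSketch {
    static String colorFor(int score) {
        if (score == 0)    return "Green";  // Clean
        if (score <= 7)    return "Yellow"; // Low
        if (score <= 127)  return "Orange"; // Medium
        if (score <= 1023) return "Red";    // High
        return "Black";                     // Critical (1024)
    }

    public static void main(String[] args) {
        System.out.println(colorFor(168)); // prints Red
    }
}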

Dockerizing the product components:
During the internship I worked on a research project focused on Dockerizing the components of the product. The main reason for using Docker is that it makes the product easy to package and ship: each service of the product is converted into a Docker container. These containers are integrated and shipped as one component, and it becomes easy to deploy them in any environment without hassle, as all the configuration settings are already defined inside the containers.

As the product consists of multiple services, multiple containers are created, one per service. The image of every container is created and stored in the centralized Docker hub. These containers must be integrated to work in synchronization, which is done with a docker-compose file, a .yml file similar to other configuration files. In the compose file we mention the service to be containerized, the link to its image, the host name of the service, the port numbers the service uses to connect, the dependencies associated with the service, volumes, and so on. An example compose file is shown below.
docker-compose.yml:
version: '2.1'
services:
  service-a:
    image: dockerhub.com/rsa/service-a:11.0
  mongo:
    image: dockerhub.com/3rd/mongo
    hostname: mongo
    ports:
      - "27017:27017"
    volumes_from:
      - service-a
  rabbitmq:
    image: dockerhub.com/3rd/rabbitmq
    hostname: rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
  service-b:
    image: dockerhub.com/rsa/service-b:11.2.0-latest
    ports:
      - "8080:8080"
    depends_on:
      - rabbitmq
      - mongo
The services and data presented in the above compose file are not exact and are written only as an example.

As shown in the docker-compose file, we can integrate multiple services to work together as one service. For our product specifically, we create the images of the endpoint server, admin server, security server, config server, nginx server, RabbitMQ, MongoDB, etc., and push those images into our centralized repository, from where they can be accessed across the organization. We then add the path to each image, set the respective port numbers and dependencies, and ship them as one unit. Running the compose file should bring up all the services with their dependencies in place.
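Assuming the compose file above, the stack can typically be started and stopped with the standard commands:

docker-compose up -d      # start all services, honoring depends_on ordering
docker-compose down       # stop and remove the containers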

The docker-compose file also provides for environment variable settings, such as setting the username and password for the container, specifying the transport bus, enabling authentication, naming the control and application servers, and choosing whether authentication is performed. A sample environment setting is shown below.
environment:
  service.password: password
  transport.bus.host: rabbitmq
  data.control.servers0: mongo
  data.application.servers0: mongo
  process.shutdown-delay: 1s
  security.authorization.remote-enabled: "false"
  security.pki.use-deployment-trust: "false"
  transport.bus.shutdown-timeout: 1ms
  transport.http.secure: "false"
  security.oauth.auth-server: "true"
In the above example, the first entry sets the password for the environment, the first level of security for using the service. Next, the transport bus used is RabbitMQ. The control and application databases are both MongoDB, accessed on different ports. Then the timeouts for process shutdown delay and transport bus shutdown are set. Next comes securing the transport service, which is "true" if we use HTTPS and "false" if we use HTTP as the transport protocol. The last field states whether the authentication server should be enabled.
All the fields inside the compose file are customizable as per the requirements, and the environment fields can either be set per service or set globally for all services.

Dockerizing the components helped us greatly while testing the product as a whole with different integration tests. Because Docker provides the virtualization, the integration tests can be run on any platform, even on different machines, to test the product's compatibility.

MongoDB Data Models:
As we know, data generation is not static; data keeps appending. With the increase in data, it may become difficult for the user to accommodate the storage needs without losing any data, so the user might consider extending disk or SSD storage. But moving MongoDB data becomes a challenging task, as the user has no idea what is stored inside which collection, since it is stored in an encoded format. In such situations the user can adopt different data-model techniques to store the collections differently, which helps move them easily. The solution proposed here covers both Windows and Linux operating systems.

The MongoDB configuration file is again a YAML file. It is stored at '/etc/mongod.conf' on Linux and at '<install directory>/bin/mongod.cfg' on Windows.
When we install MongoDB on Windows, a folder called 'data' with a 'db' folder inside it must be created, whereas on Linux it is created automatically. All the collections we create get stored in the db folder. Usually two files are stored for each collection: a collection file containing the actual data in encoded format, and an index file associated with the collection. So whenever we want to move files due to insufficient space, we never know which index file belongs with which collection file. To overcome this kind of problem we adopted two storage layouts for MongoDB. In one layout, all collections are stored together in a 'collection' folder and all indexes in an 'index' folder; in the other, each collection and its associated index are stored together. Whichever suits the storage requirement can be adopted. The MongoDB config file is where all these storage layouts are configured. The following is an example MongoDB configuration file:
systemLog:
  destination: file
  path: <path to mongodb log file>
  logAppend: true/false
storage:
  journal:
    enabled: true/false
processManagement:
  fork: true/false
net:
  bindIp: <ip address>
  port: <port number>
setParameter:
  enableLocalhostAuthBypass: true/false
The systemLog settings relate to storing the logs of mongo actions; we can specify the destination and the path where we want the logs stored. The 'storage' field is where we give the storage specification. Network and security settings can likewise be customized through the configuration file.

The following configuration file is the one we used in our project:
systemLog:
  destination: file
  path: 'c:\data\log\mongod.log'
storage:
  dbPath: 'c:\data\db'
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
According to the above configuration file,
systemLog stores the related logs; the path points to the 'mongod.log' file inside the 'c:\data\log' folder, and all logs are written to that file.

The storage section has dbPath, which indicates the part of the storage to which these configurations apply.

directoryPerDB is a choice to create a separate directory for every database; if set to true the directory is created, otherwise not.

wiredTiger is the storage engine, and directoryForIndexes is again a choice for the user to keep the indexes in a directory separate from the collection data.
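For illustration, with directoryPerDB and directoryForIndexes both set to true, the data directory for a hypothetical database named 'endpoint' would be laid out roughly as follows, keeping collection data and index files apart:

c:\data\db\
  endpoint\          (one sub-directory per database, from directoryPerDB)
    collection\      (collection data files)
    index\           (index files, from directoryForIndexes)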

CHAPTER 7
RESULTS
In this chapter, various screenshots of the product are attached; they act as proof of concept for the implementation details.

7.1 Host View:

Figure 7.1: Global host page view
The above figure shows the global Hosts view. A host can be a laptop, workstation, server, tablet, router, or any physical or virtual system on which a supported OS is installed. For each host the view shows operating system details, agent details (version, installed time, ID, monitoring mode, last seen time, state, last scan time), hardware details (CPU, RAM), locale information, logged-in users, network interfaces, and security configurations.

The Hosts view provides a view of all hosts with a NetWitness Endpoint agent installed. In the Hosts view, the user can:
Select a host from the Hosts table to view detailed information.
View detailed scan results.
Export all categories of scan data for the selected host for a specific scan time in JSON format.
Search across all snapshots (by file name, file path, and file SHA-256 checksum). For example, searching for the file name cmd.exe on a selected host displays all snapshots containing this file name. To search by SHA-256 checksum, provide the entire hash string; the result will display the matching value.
View related information of the host in the following subtabs: Overview, Processes, Autoruns, Files, Drivers, and System Information.
Sort and filter hosts.

7.2 Hosts Filter view:

Figure 7.2: Hosts filter view
Figure 7.2 depicts the filter options for hosts on the Hosts page, with the saved filters on the left panel listed under the names given to them when they were saved.
7.3 Host Detail View:

Figure 7.3: Host Detail view
The above figure 7.3 describes the details of a host, including the processes, files, autoruns, and drivers on that particular host, along with their respective properties: a process has process properties, each module in the Files tab has file properties, and autoruns have properties of their own, all displayed on the right-side panel. The scanned data is stored as a snapshot, which is created each time a scan runs and completes successfully.

7.4 Files View

Figure 7.4: Global Files page view
The Files view provides a list of unique files found in the deployed environment, along with relevant information collected during an investigation. These files are executables, and three executable formats are supported out of the box:
Portable Executable (PE) (Windows) – These are exe, dll, and sys files. Each file contains information about its checksum, compile details, the different sections present in the file, imported libraries, and certificate details (signer, thumbprint, company name).
Mach-O (Mac) – These are app bundles, dylibs, and kernel extensions. Each file contains information about its checksum, the different sections present in the file, imported libraries, and certificate details (signer, thumbprint, company name).
Executable and Linkable Format (ELF) (Linux) – Each file contains information about its checksum, the different sections present in the file, imported libraries, and RPM package details.
7.5 Files Filters

Figure 7.5: Files filters dropdown
Figure 7.5 depicts the filter options on the Files page, with properties related to files. Saved filters appear on the left panel under the names given to them when they were saved.

7.6 Files Details

Figure 7.6: File details of a particular host
The above figure 7.6 depicts all the files present on the agent box in that snapshot, that is, all the .sys, .dll, and .exe files present at the time of scanning. Entries may be duplicated, since the same file might have run as a process, as an autorun, or as a library. The file property panel includes the MD5, SHA1, and SHA256 hashes and the path of the file; both are provided as search options, so the user can search by file hash or by path.
7.7 Search on Snapshots to Investigate Suspicious Hosts

Figure 7.7: Specific host page of a suspicious host
If the user is investigating a host for suspicious activity, or checking whether it is infected with a known malware, he/she can search for occurrences of a file name, file path, or SHA-256 checksum on the selected host. The result displays the search value with details such as the file name and signature information, along with its interaction with the system (ran as a process, library, autorun, service, task, or driver). To view more details on these results, the user can click on the category. To search by SHA-256 checksum, the user just needs to provide the entire hash string in the search box. Consider, for example, a user who has clicked and executed a malicious attachment from a phishing email, downloading it to C:\Users.

To investigate this file:
Enter the file path C:\Users in the search box. The search displays all the executables in this folder. In this example, the file server.exe is an unsigned file that might be malicious.

This file has run as a Process and an Autorun.

To view details of the file, click Autorun or Process. The following screen shows the Autorun page where you can view the file name and registry path.

Reviewing Processes
In the Host view, the user can select the Processes tab to view the processes that were running on the selected host at the time of the scan. When reviewing processes, it is important to examine the Launch Arguments: even legitimate files can be used for malicious purposes, so all of them should be viewed to determine accurately whether there is any malicious activity.

Reviewing Autoruns
In the Host view, the user can select the Autoruns tab to view the autoruns, services, tasks, and cron jobs running on the selected host. For example, in Services, you can look at file creation times. The compile time is found inside each portable executable (PE) file, in the PE header. This time stamp is rarely tampered with, even though an adversary can easily change it before deploying the file to a victim's endpoint, so it can indicate whether a newly created file has been introduced. The user can then compare the compile time of the file against the creation time reported by the system and look at the difference. If a file was compiled a few days ago, but its time stamp on the system shows that it was created a few years ago, this indicates that the file has been tampered with.

Reviewing Files
In the Host view, the user can select the Files tab to view the list of all files scanned on the host at the time of the scan. By default it displays 100 files; to display more, the user has to click Load More at the bottom of the page. File names are a useful signal here. If a file is named svch0st.exe, scvhost.exe, or svchosts.exe, someone is obviously trying to mimic the legitimate Windows file named svchost.exe. On the other hand, a file with a random-looking name may also be suspect, as many trojans write random file names when dropping their payloads, precisely to prevent an easy search across the endpoints in the network based on the file name.

Reviewing Libraries
In the Host view, the user can select the Libraries tab to view the list of libraries loaded at the time of the scan. For example, a file with high entropy gets flagged as packed; a packed file is likely compressed to reduce its size, or to obfuscate malicious strings and configuration information.

Reviewing drivers
In the Host view, the user can select the Drivers tab to view the list of drivers running on the agent's machine at the time of the scan. For example, using this page you can check whether a file is signed or unsigned. A file that is signed by a trusted vendor such as Google, Apple, or Oracle, with the term 'valid', indicates that the file is not malicious.

Figure 7.8: Drivers view with .dll extension in the Global Files page view
Reviewing System Information
In the Host view, the user can select the System Information tab. This page lists the agent's system information; for Windows, it displays the host file entries and network shares of that host. For example, malware uses host file entries to evade detection by security software, blocking the traffic to the download and update servers of the most well-known security vendors.
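As an illustration (with a made-up domain), a single hosts file entry of the kind such malware adds might look like the line below; it redirects a vendor's update server to the local machine so that updates can never be downloaded:

127.0.0.1    updates.security-vendor.example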

7.8 Database Retention

Figure 7.9: Database Retention page view in the UI
The Database Retention feature is added to ensure that no memory is wasted on inactive agents, or on unnecessary data of agents that are active. For example, if a user goes on a long holiday, say for three months, the communication between the agent and the endpoint server is cut off during that time. In the above figure, if the "Inactive Agents Retention Policy" threshold is set to 60 days, then the data of agents that have been inactive for sixty days is deleted from the UI and from all the collections of the database, so that no stale data is left behind. Under the "Database Retention Policy", if the threshold is set to, say, 30 days, then snapshots of the machine older than thirty days are removed from the "Snapshot time" drop-down in the UI, and all snapshots older than thirty days are deleted from the "command", "filecontexthistory", and "machinehistory" collections in the database. The figure below shows the collections in the database.

Figure 7.10: Database retention across all the collections of the active agents in MongoDB
Here, the "object" document whose ID matches an agent whose data is older than the threshold value gets deleted.
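As a minimal sketch of what such a cleanup amounts to, the mongo shell commands below delete snapshot documents older than a 30-day threshold from the three collections named above. The database name and the field name 'scanTime' are assumptions made purely for illustration; the actual retention service and schema are internal to the product:

// switch to the hypothetical database used for illustration
use endpoint
// compute the 30-day threshold
var cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)
// 'scanTime' is an assumed field name for the snapshot time
db.command.deleteMany({ scanTime: { $lt: cutoff } })
db.filecontexthistory.deleteMany({ scanTime: { $lt: cutoff } })
db.machinehistory.deleteMany({ scanTime: { $lt: cutoff } })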

7.9 Export to CSV in Hosts page:

Figure 7.11: Export to CSV file in Hosts page
The above figure 7.11 depicts the exported CSV file downloaded with the default visible columns (to the left) and after removing some of the visible columns (to the right). The exported CSV file contains only those fields that are visible in the UI at the time the file is downloaded.

7.10 Export to CSV Files Page

Figure 7.12: Export to CSV file in Files page
The above figure 7.12 depicts the exported CSV files of the Files page with the default visible columns (to the left) and after removing some of the visible columns (to the right).

7.11 Admin Page

Figure 7.13: Admin page view
The above figure 7.13 depicts the Admin page, which lists all the services; among these, the endpoint server is the service related to database retention, scan scheduling, and the agent packager. These are the services the endpoint server provides to the user to create, maintain, and store data related to an agent: creation is provided by the Agent Packager service, maintenance by the scan scheduling service, and storage by the data retention service.

7.12 Packager

Figure 7.14: Agent Packager page to generate the agent packager
The above figure 7.14 depicts the packager service, provided by the endpoint server, which generates the agent packager once the mandatory fields are filled in.

7.13 Generate Agent Executables

Figure 7.15: Agent Packager page to generate executables for Windows, Mac, and Linux machines
The above figure 7.15 depicts the generation of executables after unzipping the downloaded agent packager folder and running the agentpackager.exe file. The user is asked for the client certificate password before the executables are generated; once the certificate password matches, the executables for Linux, Windows, and Mac systems are generated.

CONCLUSION
The overall intention of the product is to provide better security at every network level. It helps large organizations monitor their endpoints, investigate suspicious hosts, and act on threats before confidential data is compromised.