The Path to AI: #1 Intro to the Fancy Normal Science
The cybersecurity field has grown so fast that the industry now needs many professionals to defend networks and web applications and patch issues before cybercriminals can exploit them. Thankfully, we now have many conferences and many people working together to make the internet a better place.
In 2018, artificial intelligence was a significant trend in the technology field. Every tech outlet covered developments in the area and speculated about what would come next. Of course, some of them wrote articles claiming that robots based on artificial intelligence would take over. It sounds like fiction, but sadly that is what many people thought.

Yet artificial intelligence was never a new topic in computer science. By the mid-1950s, a generation of scientists, mathematicians, and philosophers was already exploring the idea. One of them was Alan Turing, who discussed in his paper techniques and theories for building intelligent machines. At the time, however, computing was enormously expensive (leasing a computer could cost on the order of $200,000 a month), so a project like his could hardly be funded on that budget. It was not long before a group of researchers (Allen Newell, Cliff Shaw, and Herbert Simon) proved the concept by creating a program that could mimic the way humans solve problems. The research was funded by RAND (the Research and Development Corporation), and the program was presented at the DSRPAI (Dartmouth Summer Research Project on Artificial Intelligence) conference in 1956. It was an early demonstration that there is a way to make machines solve problems intelligently. Still, that was just the beginning of a significant upheaval in science: researchers started investigating the field further, and the more they approached it, the more accurate it became. That is precisely why everyone is looking forward to seeing the limits of this science, what can be achieved with it, and what its drawbacks will be.
After a while, technology advanced: information became more available, and hardware got smaller and more powerful, thanks to the researchers who got involved and helped develop it. With this availability of storage space and fast processing power, GPUs (graphics processing units) are now used in artificial intelligence because they can perform the required calculations much faster than a CPU (central processing unit).
The availability of the right hardware, combined with the right software, ends up giving surprising results. Now we have what it takes to create something marvellous. We can apply the same methodology to build a piece of software that mimics human problem-solving in the cybersecurity field, specifically in the offensive part (PEN-testing*).
Understanding the way some algorithm works is a good start, but it is not enough to simply follow the exact methodology of a project that already succeeded. Creating an intelligent tool requires testing, experimentation, and, essentially, knowing how this science works, combined with useful data from a cybersecurity perspective. The scientist may then apply the scientific methodology to take the same decisions that a very good PEN-tester or InfoSec analyst* would take in a real-life situation. If the science of artificial intelligence fulfils the needs of this sector, it will be a relief for cybersecurity specialists, and a safer internet will start to rise stronger.
Suppose we dive a little into this mysterious science. In that case, we will find that it is just a normal science composed of mathematical functions and probabilities. It takes the best decision based on well-processed data, which can be anything from files and documents to metadata. All it takes is gathering some historical data; then, given a new situation, the algorithm can predict what will happen with some accuracy. An excellent way to describe it is as a time machine that cannot fix the past, but can predict the future based on available data and a little bit of science.
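The idea above (learn from historical data, then take a decision about a new situation) can be sketched in a few lines. This is a minimal, illustrative nearest-centroid classifier written for this article; the feature names and the traffic numbers are invented for the example and are not from any real dataset.

```python
# Toy sketch: learn from historical labelled data, then classify a new case.
# Features and values below are hypothetical, for illustration only.

def train_centroids(samples):
    """Average the feature vectors of each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Historical data: (failed logins per hour, requests per minute) -> label
history = [
    ([40.0, 300.0], "malicious"),
    ([55.0, 250.0], "malicious"),
    ([2.0, 10.0], "benign"),
    ([1.0, 15.0], "benign"),
]

model = train_centroids(history)
print(predict(model, [50.0, 280.0]))  # -> malicious
print(predict(model, [3.0, 12.0]))    # -> benign
```

Real systems use far richer models, but the shape is the same: historical observations in, a decision about the present situation out.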
And these powerful things, smartphones, capable hardware, workstations, we have them all in our hands every day.
So, if we revisit the big problem that scientists faced before, creating a truly intelligent machine, we notice that this obstacle no longer exists given everything now available. Nowadays, we have fast internet bandwidth and terabytes of storage in small drives that look like a stick of RAM (random-access memory). Researching artificial intelligence is a necessity now, because if nobody reveals the power of this science, shares the knowledge, accepts it as a science, and works hard on getting the best of it, only one thing will happen, and it has already started to happen: big companies are using it to their advantage, from OSINT* tools to data gatherers and online scrapers.

To understand this further, look at what happened with Cambridge Analytica, a research company that worked on data science and machine learning as well as neural networks (which we will discuss later). They used data from a third-party app available on Facebook, plus public data about individuals, their relationships with each other, and what they like and hate; through roughly 270,000 app users, they harvested the data of up to 87 million Facebook profiles. They then targeted individuals with deceptive advertising and other techniques to convince them that some politicians were better than others. From this point of view, we can see that things will sink if we do not act, treat the risk, and build a defence. If a private company can take this simple, forgotten science, invest time and money in it, and do all of these things, we can do precisely the same, and build algorithms that will defend a company's network or a specific server. It is possible to build that with this amazing science.
It would be good research to explore the power of AI to prevent malicious manipulation by third parties that can exploit populations behind the scenes without anybody's knowledge. Some readers will ask: why must it be an offensive tool rather than a fair IDS (intrusion detection system) or IPS (intrusion prevention system)?
A common misunderstanding in cybersecurity is that threat actors only use publicly available exploits against a company or organisation. This is far from real-life scenarios, because patch management alone is not enough to secure a technology infrastructure. We often hear about big companies getting hacked and wonder what exactly the cause was, what the risk was, and how it all happened. Take the July 2020 Twitter hack: the cybercriminals targeted the company through its support staff, using standard social engineering techniques to gain access to Twitter's administration tools. The attack succeeded, and they had access to any account; they then posted scam messages from high-profile accounts to make money from the weakness they found. This could happen to any other organisation, and to address this problem Red Teaming comes into play: the red teamer captures the threat's perspective. The methodology has a military philosophy, and it is proven and adopted by many big companies that have integrated Red Teaming into their defensive capabilities; its effectiveness grows when tested under the actual stress conditions of a real-life scenario.
According to NIST Special Publication 800-53 (Rev. 4), control CA-8, penetration testing is defined as "a specialized type of assessment conducted on information systems or individual system components to identify vulnerabilities that could be exploited by adversaries...".