Archive | Boulot

RESTful API for dummies

An Application Program Interface, aka API, is code that allows two software programs to communicate with each other. The API spells out the proper way for a developer to write code that requests services from an operating system or other applications. In a web context, APIs allow two web apps, or a web app and a non-browser client, to communicate without going through a browser.

A RESTful API, also referred to as a RESTful web service, is an Application Program Interface that uses HTTP requests to GET, PUT, POST and DELETE data.

A RESTful API  is based on REpresentational State Transfer (REST) technology, an architectural style and approach to communications often used in web services development.

REST technology is generally preferred to the more robust Simple Object Access Protocol (SOAP) technology because REST uses less bandwidth, making it more suitable for internet usage.

The REST used by browsers can be thought of as the language of the internet. With cloud use on the rise, APIs are emerging to expose web services. REST is a logical choice for building APIs that allow users to connect and interact with cloud services. RESTful APIs are used by such sites as Amazon, Google, LinkedIn and Twitter.
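As a quick illustration of those HTTP verbs, here is a minimal sketch using Python's requests library against a hypothetical /users resource (the base URL, the payload fields and the returned id are assumptions made up for the example):

```python
import requests

BASE = "https://api.example.com"  # hypothetical REST endpoint

# Create a resource with POST
r = requests.post(f"{BASE}/users", json={"name": "Alice"})
user_id = r.json()["id"]          # assumes the API returns the new resource's id

# Read it back with GET
r = requests.get(f"{BASE}/users/{user_id}")
print(r.status_code, r.json())

# Replace it with PUT, then remove it with DELETE
requests.put(f"{BASE}/users/{user_id}", json={"name": "Alice Smith"})
requests.delete(f"{BASE}/users/{user_id}")
```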


Originally, I liked this article for part of its content, but not for its structure, and since I needed to explain why some APIs do not follow REST principles, I had to rewrite this introduction.

Posted in Boulot, Clic, Trop sérieux | 0 comments

Cyber Range

Listened attentively today to the latest weekly « No Limit Secu » podcast, devoted to Cyber Range(s).

Among the guests:

Diateam

Sysdream

Cyber Test System

These three specialists are interviewed by the usual hosts Hervé Schauer, Nicolas Ruff and Johan Uloa.

Official definition of a cyber range?

3′: Just an infrastructure?

GP: it's not new … It varies a lot depending on the resources you want to put into it: a copy of a real system in which you come and train. A digital firing range?

History: 2008, the idea of testing a complete system (DARPA's National Cyber Range), beyond simple « unit » tests of individual devices.

7′: HS: the rainbow of Teams?

AK: At first it was only attack versus defense between two teams (Red vs Blue), then other actors joined the environment.

Red Team: the attackers. Either they use their own tools or they trigger automatic generation of attacks and traffic.

Green Team: simulates legitimate traffic to keep the system running.

Yellow Team: the team that unwittingly takes part in the attacker's scenario. They take part in malicious activity without necessarily realizing it.

Blue Team: it handles supervision, which is not only defense (SOC, NOC, incident response). It ensures the proper functioning of the Blue Team => the experts are not necessarily exactly aligned on this point.

White Team: they control the overall exercise.

Purple Team: information relay, legal, communications.

OF: a good analogy between a cyber range and a flight simulator.

17′: Education before training?

OF: The cyber range is an extension of education. A course consists of small, short exercises before moving on to a complex environment that resembles one's real environment.

NR: is concerned about whether the real environment can really be reproduced, if only in terms of licenses (or dongles).

GP: a course is video training + slides + lab exercises on your own VM. A quick reminder of Confucius and constructivism from GP: everything we see we forget, everything we do we remember! Then GP draws a little analogy with the Centre d'Entrainement en Zone UrBaine (CENZUB, the French Army centre at Sissonne), not to be confused with the eunuch (« SansZob ») (editor's note to check whether everyone is still following). Urban-environment training for the armed forces.

A cyber range is an environment that pits defenders against opponents. There has to be life in the system. BlueCyForce's technology partners enable the…

NR: Still on the representativeness of the system, how do you simulate VirusTotal?

GP suggests a fake VirusTotal, Twitter (based on Mastodon), AFP … and the platform is regenerated for each sequence. Even so, you burn your hooks and your backdoors with every game.

NR: No way to do passive DNS over three years, or public reports and indicators of compromise; you work in a closed environment.

AK: deployment of a system from QuarksLab and specific life-simulation applications.

25′ NR: What about the quality of the scenarios, who writes them? (The client, who doesn't know its own risks, or the provider, who doesn't know the business?)

GP loses us a bit… but adds that it's a bit of both and that there is both a technical Blue Team and a management Blue Team. Example: 22 pages of timeline and 32 pages of platform description received from the client. It depends on the client's level of requirements.

NR: do you simulate consultants?

GP: if it is requested (when it adds context), we do it. A crisis-management exercise is one use of the Cyber Range. Example of the cameraman who shows up in the middle of the night …

31′: NR: who does what with cyber ranges? What is the state of the market? Pentest training?

GP: Training and education services, sale of cyber ranges with more or less service around it.

OF: we try to track player performance. Build a ranking / global score. The idea is that it improves every year.

AK : ….

HS: Isn't the Cyber Range more useful for defense than for attack? NR turns this into a question about cyber range platforms sliding toward a bit of ANSIBLE and three clicks in the Cloud.

GP: On the contrary, people want to have their own platform … even though BlueCyForce announced cyber range as a service at the FIC (30% of usage). A hypervisor with VMs does not make a Cyber Range. It's complicated to put your NATO encrypting router in the Cloud. The cyber range is: my infra, my vulns, my path! As at CENZUB, it is the most realistic environment possible for training to respond to cyber attacks.

OF: the cyber range is used for training and crisis management, but also for challenges, which let you learn while having fun (hi Malice ???).

AK: directives such as NIS make continuous training mandatory. This is already the case in the Asia-Pacific region, where critical companies are required to send their employees to external training sessions.

HS: If you don't train regularly, within three years you have forgotten everything from the course.

GP is not nostalgic but tells us about the evolution of his cyber range since SSTIC 2005: « Simulation hybride de la sécurité des systèmes d'information : vers un environnement virtuel de formation ». SPLUNK (pseudo SIEM… so right!), QRadar, Prelude Keenaï: you will not give people relevant training if they don't believe in it, if it is not representative. What matters is believing in it. Hey Guillaume, there haven't been carburetors for a long time 🙂

JU: How do you simulate the psychological pressure of a crisis?

GP warns us about the proliferation of Teams but stresses the need for a team of credible actors.

NR: is it possible to simulate an ANSSI team turning up?

Closing words:

49′ HS: training is what makes what was taught in the first place stick. Education is good, training is better. Teddy Riner trains every day.

(Beyond wanting to know when they would get around to talking about ENSIBS … I wanted to hear each guest's point of view on the subject, and it echoed the talk Eric Weber of Thales Communication and Security gave on the topic at C&ESAR 2017: Problématique de formation des opérateurs face aux menaces Cyber : utilisation des Cyber Range.)

As a goody: a good document from Airbus DS on the subject.

 

Cyber Range offering (thanks Airbus DS)


Posted in Boulot, Clic | Comments closed on Cyber Range

Fast Identity Online (FIdO)

Will we ever manage to get rid of username / password authentication? Today I discovered FIdO.

Who implements it today? As of 19/02/2018, the FIdO Alliance website lists 18 commercial implementations of the standard, including DropBox, PayPal, Facebook …

FIDO (Fast ID Online) is a set of technology-agnostic security specifications for strong authentication. FIDO is developed by the FIDO Alliance, a non-profit organization formed in 2012.

FIDO specifications support multifactor authentication (MFA) and public key cryptography. Unlike password databases, FIDO stores personally identifying information (PII), such as biometric authentication data, locally on the user’s device to protect it. FIDO’s local storage of biometrics and other personal identification is intended to ease user concerns about personal data stored on an external server in the cloud. By abstracting the protocol implementation with application programming interfaces (APIs), FIDO also reduces the work required for developers to create secure logins for mobile clients running different operating systems on different types of hardware.

FIDO supports the Universal Authentication Framework (UAF) protocol and the Universal Second Factor (U2F) protocol. With UAF, the client device creates a new key pair during registration with an online service and retains the private key; the public key is registered with the online service. During authentication, the client device proves possession of the private key to the service by signing a challenge, which involves a user-friendly action such as providing a fingerprint, entering a PIN, taking a selfie or speaking into a microphone.

With U2F, authentication requires a strong second factor such as a Near Field Communication (NFC) tap or USB security token. The user is prompted to insert and touch their personal U2F device during login. The user’s FIDO-enabled device creates a new key pair, and the public key is shared with the online service and associated with the user’s account. The service can then authenticate the user by requesting that the registered device sign a challenge with the private key.
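To make the registration and challenge-signing flow described above more concrete, here is a minimal sketch of the underlying public-key pattern using the Python cryptography library. It only illustrates the general idea, not the actual FIDO UAF/U2F message formats: the device keeps the private key, and the service stores only the public key and verifies a signed challenge.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the device creates a key pair; only the public key
# leaves the device (serialization and transport are skipped for brevity).
device_private_key = ec.generate_private_key(ec.SECP256R1())
service_public_key = device_private_key.public_key()

# Authentication: the service sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local user gesture (PIN, fingerprint, ...)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature with the stored public key.
try:
    service_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("user authenticated")
except InvalidSignature:
    print("authentication failed")
```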

The history of the FIDO Alliance

In 2007, PayPal was trying to increase security by introducing MFA to its customers in the form of its one-time password (OTP) key fob: Secure Key. Although Secure Key was effective, adoption rates were low; it was generally used only by a few security-conscious individuals. The key fob complicated authentication, and most users just didn’t feel the need to use it.

In talks exploring the idea of integrating finger-scanning technology into PayPal, Ramesh Kesanupalli (then CTO of Validity Sensors) spoke to Michael Barrett (then PayPal’s CISO). It was Barrett’s opinion that an industry standard was needed that could support all authentication hardware. Kesanupalli set out from there to bring together industry peers with that end in mind.

The FIDO Alliance was founded as a result and went public in February 2013. Since that time, many companies have become members, including Google, Microsoft, ARM, Bank of America, MasterCard, Visa, Samsung, LG, Dell and RSA. Today, FIDO authentication is guided by three mandates: ease of use, standardization and privacy/security.

strong authentication
multifactor authentication
public key
PII
biometric authentication
one-time password
strong password

Posted in Boulot | Comments closed on Fast Identity Online (FIdO)

Is Rugged DevOps the new buzzword ?

Read today on WhatIs.com :

« Rugged DevOps is an approach to software development that places a priority on making sure code is secure before it gets to production. Rugged DevOps takes the lean thinking and Agile mindset that DevOps embraces and applies it to « ruggedizing » software and making sure that security is not a post-development consideration. Rugged DevOps is often used in software development for cloud environments.

The approach requires programmers and operations team members to possess a high degree of security awareness and have the ability to automate testing throughout the software development lifecycle. Despite a large percentage of the IT industry adopting agile and DevOps processes, security testing cycles are still often based on the traditional and cumbersome waterfall approach. This means many organizations forget to do security qualifications tests, such as PCI compliance checks and risk assessments, until it’s almost too late.

To sync security with DevOps cycles, a rugged DevOps team must log integration and delivery processes at a very granular level, so security issues can be identified as they arise. The more granular the records are, the easier it becomes to identify security holes. Both Jira and Cucumber are popular tools for keeping logs in rugged DevOps environments.

For automated DevOps and security testing, there’s a large portfolio of product types, including static application security testing, dynamic application security testing, interactive application security testing and runtime application security testing. Vendors include Contrast Security, Fortify, Veracode and Waratek »

That being said, what’s the difference between Rugged DevOps and DevSecOps?

Here is an answer from Sumologic :

In traditional software development environments, security has always been considered a separate aspect – even an afterthought – but now the two practices have emerged to produce safer software in the form of Rugged DevOps and DevSecOps.

A Rugged approach to development and deployment produces applications that stand up to the rockiest tests

Rugged DevOps is an emerging trend that emphasizes a security-first approach to every phase of software development. DevSecOps combines traditional DevOps approaches with a more integrated and robust approach to security. These approaches are not mutually exclusive, and take slightly different paths toward the same goal of shifting security leftward and continually focusing on it through the production pipeline.

As today’s environments evolve toward continuous delivery models that can see multiple production releases per day, any miscalculation or error in security can clog the production pipeline. Below is a look at how both Rugged DevOps and DevSecOps approaches can help your organization achieve state of the art design security.

What Is Rugged DevOps?

Rugged DevOps takes the traditional view of security teams as an obstacle and turns it upside down, engineering security into all aspects of design and deployment. Instead of security playing the role of traffic cop slowing down progress, a Rugged DevOps approach makes security a kind of police escort, helping the delivery process proceed with speed and safety.

Rugged DevOps starts with creating secure code. In traditional models code is developed, then penetration testing and automated tools are used to deem the software ‘safe.’ Secure code development involves a different approach, where previously separate teams (development, Q/A, testing, etc.) interact throughout the entire software lifecycle, addressing not just security holes but industry trends and other factors to develop ‘defensible’ code through communication, collaboration, and competition.

The key components of a successful rugged development culture are outlined in the Rugged Manifesto, the definitive document on the subject:

I am rugged and, more importantly, my code is rugged.
I recognize that software has become a foundation of our modern world.
I recognize the awesome responsibility that comes with this foundational role.
I recognize that my code will be used in ways I cannot anticipate, in ways it was not designed, and for longer than it was ever intended.
I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.
I recognize these things — and I choose to be rugged.

The Rugged DevOps approach was developed to address problems in the traditional delivery method, which handled security by finding problems in the existing software, reporting them, and fixing them. As production releases come to market with ever increasing speed, this system quickly gets overwhelming, and organizations often resort to building out compliance systems that slow development to a crawl.

The rugged approach inverts that model, fixing the slowdown effect of applying security as an afterthought by attacking security at every level of development. Dan Glass, the chief information security officer (CISO) at American Airlines, outlines his company’s approach to using Rugged DevOps to improve and streamline delivery. Their R.O.A.D approach keys on four focus areas.

Rugged Systems. American Airlines (AA) builds security in at all development stages, resulting in systems that are resilient, adaptable, and repeatedly tested.

Operational Excellence. The AA culture produces teams that build reliable, sustainable, and fast software, with each team owning quality control near the source and empowered to make improvements.

Actionable Intelligence. Glass stresses the need for telemetry that efficiently processes logs and reveals data that is correct, meaningful and relevant. This data is communicated to teams in as close to real-time as possible, empowering them to address issues.

Defensible Platforms. AA develops and maintains environments that are hardened and capable of surviving sustained attacks. Security teams are involved at every step of the R.O.A.D process, ensuring that adequate defenses are in the software’s DNA.

The Rugged DevOps approach focuses on security through every stage of the development and delivery process, resulting in systems that can endure the rigors of a production environment full of potential hostility. But Rugged isn’t a stand-alone approach to safety. It overlaps with the emerging trend of DevSecOps, which takes a similar approach to securing and hardening applications from inception forward.

What Is DevSecOps?

Wrap security into every step of development to safely deliver product

DevSecOps is the new philosophy of completely integrating security into the DevOps process. It calls for previously unprecedented collaboration between release engineers and security teams, resulting in a ‘Security as Code’ culture. From the DevSecOps Manifesto:

“By developing security as code, we will strive to create awesome products and services, provide insights directly to developers, and generally favor iteration over trying to always come up with the best answer before a deployment. We will operate like developers to make security and compliance available to be consumed as services. We will unlock and unblock new paths to help others see their ideas become a reality.”

The DevSecOps movement, like DevOps itself, is aimed at bringing new, clearer thinking to processes that tend to bog down in their own complexity. It is a natural and necessary response to the bottleneck effect of older security models on modern, continuous delivery cycles, but requires new attitudes and a shuffling of old teams.

What Is the ‘Sec’ in ‘DevSecOps’?

SecOps, short for Security Operations, is an approach for bridging the traditional gaps between IT security and operations teams in an effort to break silo thinking and speed safe delivery. The emerging practice requires a sea change in cultures where these departments were separate, if not frequently at odds. SecOps builds bridges of shared ownership and responsibility for the secure delivery process, eliminating communications and bureaucratic barriers.

Tools and processes are powerful, but it’s people who deliver constant security

Rugged DevOps and DevSecOps: The Shift to Continuous Security

Rugged DevOps and DevSecOps may sound like the latest tech industry buzz phrases, but they are critical considerations in contemporary business. In a market where software can change and respond to customers’ needs multiple times per day, old security models do not work. Potential is lost behind fear of flaws, stifling the continuous delivery process. This cultural shift is helping organizations address security in a continuous delivery environment.

In this great resource from White Hat Security, the authors outline seven habits for successfully adopting Rugged DevOps:

1. Increase Trust And Transparency Between Dev, Sec, And Ops.
2. Understand The Probability And Impact Of Specific Risks
3. Discard Detailed Security Road Maps In Favor Of Incremental Improvements
4. Use The Continuous Delivery Pipeline To Incrementally Improve Security Practices
5. Standardize Third-Party Software And Then Keep Current
6. Govern With Automated Audit Trails
7. Test Preparedness With Security Games

By incorporating these practices, organizations can deliver better product faster, find and fix problems more efficiently, and use automated audit trails to take master-level control of their Rugged DevOps environment.


Keep on playing !

Posted in Boulot | Comments closed on Is Rugged DevOps the new buzzword ?

OVH VPS Testing Platform…

A victim of its own success, my hackMeIfYouCan lasted a good ten months. Let's try to do better 🙂

Tomcat 9.0.4 : http://vps305886.ovh.net:8080/

Apache2 over Debian : http://vps305886.ovh.net/

Posted in Boulot, Clic | Comments closed on OVH VPS Testing Platform…

CORiIn 2018

09h30 – 10h30 – Welcome coffee

10h30 – 10h50 – Introduction, Éric Freyssinet

250 attendees this year. Lecture hall full; different hall next year.

About one third of those present do not go to the FIC; about 10% are locals.

10h50 – 11h30 – L’investigation numérique saisie par le droit des données personnelles, Eve Matringe

First talk, for a cross-perspective between the legal/judicial side and the technical side.

RGPD : « Règlement n°2016/679 du Parlement Européen et du Conseil du 27 avril 2016 relatif à la protection des personnes physiques à l’égard du traitement des données à caractère personnel et à la libre circulation de ces données »

+ Directive (UE) 2016/680 du Parlement européen et du Conseil du 27 avril 2016 relative à la protection des personnes physiques à l’égard du traitement des données à caractère personnel par les autorités compétentes à des fins de prévention et de détection des infractions pénales, d’enquêtes et de poursuites en la matière ou d’exécution de sanctions pénales, et à la libre circulation de ces données, et abrogeant la décision-cadre 2008/977/JAI du Conseil

 

11h30 – 12h10 – Full packet capture for the masses, Xavier Mertens – @XMe

Moloch = Full Packet Capture Framework : https://github.com/aol/moloch

the probes (sensors) are installed on the servers in a Docker container and send their captures via cron over ssh
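As a rough sketch of that "cron over ssh" shipping step (the directory, remote host and use of scp are assumptions for the example, not part of Moloch itself), a periodic job on a sensor could look like:

```python
import glob
import os
import subprocess

CAPTURE_DIR = "/data/pcap"  # hypothetical local capture directory
REMOTE = "moloch@collector.example.com:/data/incoming/"  # hypothetical central host

# Ship every finished pcap file to the central host, then delete it locally.
for pcap in glob.glob(os.path.join(CAPTURE_DIR, "*.pcap")):
    result = subprocess.run(["scp", pcap, REMOTE])
    if result.returncode == 0:
        os.remove(pcap)
```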

#balancetonpcap

12h10 – 12h50 – Analyse des jobs BITS, Morgane Celton et Morgan Delahaye (ANSSI)

https://github.com/ANSSI-FR/bits_parser (coming soon)

12h50 – 14h00 – Lunch break

14h00 – 14h40 – CCleaner, Paul Rascagnères

14h40 – 15h20 – Retour d’expérience – Wannacry & NotPetya, Quentin Perceval et Vincent Nguyen (CERT-W)

15h20 – 16h00 – Coffee break

16h00 – 16h40 – Comment ne pas communiquer en temps de crise : une perspective utile pour la gestion d’incident cybersécurité, Rayna Stamboliyska

16h40 – 17h20 – Wannacry, NotPetya, Bad Rabbit: De l’autre coté du miroir, Sébastien Larinier

17h20 – 18h00 – Forensic Analysis in IoT, François Bouchaud

18h00 – Closing remarks

Posted in Boulot | Comments closed on CORiIn 2018

The 10 key definitions of Amazon Web Services

Amazon Web Services (AWS)
Amazon Web Services (AWS) is a comprehensive, scalable cloud computing platform offered by Amazon.com.
IaaS
Infrastructure as a Service (IaaS) is a form of cloud computing that provides virtualized computing resources over the internet. Together with Software as a Service (SaaS) and Platform as a Service (PaaS), IaaS is one of the three main categories of cloud services.

EC2
An EC2 instance is a virtual server hosted in Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure.

S3
Amazon Simple Storage Service (Amazon S3) is a scalable web storage service designed for online backup and archiving of data and application programs.
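For example, storing and retrieving an object in S3 with the boto3 Python SDK looks roughly like this (bucket and key names are made up, and credentials are assumed to be configured in the environment):

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file as an object, then read it back.
s3.upload_file("backup.tar.gz", "my-example-bucket", "archives/backup.tar.gz")
obj = s3.get_object(Bucket="my-example-bucket", Key="archives/backup.tar.gz")
print(obj["ContentLength"], "bytes retrieved")
```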

AWS Lambda
AWS Lambda is an event-driven cloud service from Amazon Web Services. It lets developers provision resources for a programming function and pay for them based on consumption, without worrying about how much Amazon compute or storage capacity is required.
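A minimal Python Lambda function is just a handler receiving an event and a context object; the event fields below are hypothetical, and deployment and wiring to an event source are done through AWS tooling:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the triggering payload (API Gateway request, S3 notification, ...)
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```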

DWaaS (cloud data warehouse)
Data warehouse as a service (DWaaS) is an outsourcing model in which a service provider configures and manages the hardware and software resources required by a data warehouse, while the customer provides the data and pays for the managed service.

Amazon Aurora
Amazon Aurora is a MySQL-compatible relational database engine from Amazon Web Services (AWS). It lets you use the code, applications and drivers of MySQL databases in Aurora with little or no adaptation.

AWS CloudTrail
Offered by Amazon Web Services (AWS), AWS CloudTrail is a web service that records API calls and monitors logs.

 

Amazon ElasticSearch
Amazon Elasticsearch Service (Amazon ES) lets developers launch and operate Elasticsearch, the Java-based open source search and analytics engine, in the AWS cloud. They can use Elasticsearch to monitor applications in real time, analyze logs and analyze clickstreams.

Amazon Virtual Private Cloud
Amazon Virtual Private Cloud (Amazon VPC) lets a developer create a virtual network for isolated resources in the Amazon Web Services cloud.

 

Posted in Boulot | Comments closed on The 10 key definitions of Amazon Web Services

SIEM (cheat sheet)

In the field of computer security, security information and event management (SIEM) software products and services combine security information management (SIM) and security event management (SEM). They provide real-time analysis of security alerts generated by applications and network hardware.

Vendors sell SIEM as on-premise software or appliances but also as managed services, or cloud-based instances; these products are also used to log security data and generate reports for compliance purposes.[1]

Overview

The acronyms SEM, SIM and SIEM have been sometimes used interchangeably.[2]

The segment of security management that deals with real-time monitoring, correlation of events, notifications and console views is known as security event management (SEM).

The second area provides long-term storage as well as analysis, manipulation and reporting of log data and security records of the type collated by SEM software, and is known as security information management (SIM).[3]

As with many meanings and definitions of capabilities, evolving requirements continually shape derivatives of SIEM product-categories. Organizations are turning to big data platforms, such as Apache Hadoop, to complement SIEM capabilities by extending data storage capacity and analytic flexibility.[4][5]

Advanced SIEMs have evolved to include user and entity behavior analytics (UEBA) and security orchestration and automated response (SOAR).

The term security information event management (SIEM), coined by Mark Nicolett and Amrit Williams of Gartner in 2005, describes:[6]

  • the product capabilities of gathering, analyzing and presenting information from network and security devices
  • identity and access-management applications
  • vulnerability management and policy-compliance tools
  • operating-system, database and application logs
  • external threat data

A key focus is to monitor and help manage user and service privileges, directory services and other system-configuration changes; as well as providing log auditing and review and incident response.[3]

Capabilities/Components

  • Data aggregation: Log management aggregates data from many sources, including network, security, servers, databases, applications, providing the ability to consolidate monitored data to help avoid missing crucial events.
  • Correlation: looks for common attributes, and links events together into meaningful bundles. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information. Correlation is typically a function of the Security Event Management portion of a full SIEM solution[7]
  • Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of immediate issues. Alerting can be to a dashboard, or sent via third party channels such as email.
  • Dashboards: Tools can take event data and turn it into informational charts to assist in seeing patterns, or identifying activity that is not forming a standard pattern.[8]
  • Compliance: Applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance and auditing processes.[9]
  • Retention: employing long-term storage of historical data to facilitate correlation of data over time, and to provide the retention necessary for compliance requirements. Long term log data retention is critical in forensic investigations as it is unlikely that discovery of a network breach will be at the time of the breach occurring.[10]
  • Forensic analysis: The ability to search across logs on different nodes and time periods based on specific criteria. This mitigates having to aggregate log information in your head or having to search through thousands and thousands of logs.[9]

Usage cases

Computer security researcher Chris Kubecka identified the following successful SIEM use cases, presented at the hacking conference 28C3 Chaos Communication Congress.[11]

  • SIEM visibility and anomaly detection could help detect zero-day exploits or polymorphic code. This is primarily due to low anti-virus detection rates against this type of rapidly changing malware.
  • Parsing, log normalization and categorization can occur automatically, regardless of the type of computer or network device, as long as it can send a log.
  • Visualization with a SIEM using security events and log failures can aid in pattern detection.
  • Protocol anomalies which can indicate a mis-configuration or a security issue can be identified with a SIEM using pattern detection, alerting, baseline and dashboards.
  • SIEMs can detect covert, malicious communications and encrypted channels.
  • Cyberwarfare can be detected by SIEMs with accuracy, discovering both attackers and victims.

Today, most SIEM systems work by deploying multiple collection agents in a hierarchical manner to gather security-related events from end-user devices, servers, network equipment, as well as specialized security equipment like firewalls, antivirus or intrusion prevention systems. The collectors forward events to a centralized management console where security analysts sift through the noise, connecting the dots and prioritizing security incidents.

In some systems, pre-processing may happen at edge collectors, with only certain events being passed through to a centralized management node. In this way, the volume of information being communicated and stored can be reduced. Although advancements in machine learning are helping systems to flag anomalies more accurately, analysts must still provide feedback, continuously educating the system about the environment.
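As a toy illustration of the edge pre-processing and correlation ideas above (the event fields, threshold and alert text are invented for the example, and no real SIEM product is implied), a collector-side filter plus a simple brute-force rule could look like:

```python
from collections import defaultdict
from datetime import datetime, timedelta

FORWARDED_TYPES = {"auth_failure", "malware_detected", "policy_change"}

def pre_filter(events):
    """Edge collector: only forward security-relevant event types."""
    return [e for e in events if e["type"] in FORWARDED_TYPES]

def correlate_bruteforce(events, threshold=5, window=timedelta(minutes=10)):
    """Central rule: N auth failures from one source within a time window."""
    failures = defaultdict(list)
    alerts = []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] != "auth_failure":
            continue
        bucket = failures[e["source_ip"]]
        bucket.append(e["time"])
        # Keep only timestamps inside the sliding window.
        bucket[:] = [t for t in bucket if e["time"] - t <= window]
        if len(bucket) >= threshold:
            alerts.append(f"possible brute force from {e['source_ip']} at {e['time']}")
    return alerts

events = [
    {"type": "auth_failure", "source_ip": "10.0.0.8",
     "time": datetime(2018, 2, 19, 9, 0, i)} for i in range(6)
]
print(correlate_bruteforce(pre_filter(events)))
```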

Here are some of the most important features to review when evaluating SIEM products:

  • Integration with other controls – Can the system give commands to other enterprise security controls to prevent or stop attacks in progress?
  • Artificial intelligence – Can the system improve its own accuracy through machine and deep learning?
  • Threat intelligence feeds – Can the system support threat intelligence feeds of the organization’s choosing or is it mandated to use a particular feed?
  • Robust compliance reporting – Does the system include built-in reports for common compliance needs, and does it provide the organization with the ability to customize or create new reports?
  • Forensics capabilities – Can the system capture additional information about security events by recording the headers and contents of packets of interest?

Pronunciation

The SIEM acronym is alternately pronounced SEEM or SIM (with a silent e).

 

Posted in Boulot | Comments closed on SIEM (cheat sheet)

To read / to listen to / to watch

14/12/2017

France Culture : L’Invité des Matins (2ème partie) par Guillaume Erner  (24min)
Neutralité du net, hégémonie des GAFA : la démocratie prise dans la toile (2ème partie)

With Benjamin Bayart and Sébastien Soriano

https://www.franceculture.fr/emissions/linvite-des-matins-2eme-partie/neutralite-du-net-hegemonie-des-gafa-la-democratie-prise-dans-la-toile-2eme-partie

Podcast France Culture

Posted in Boulot, Clic | Comments closed on To read / to listen to / to watch

Kubernetes (notes VT)

 


Kubernetes is Google’s open source system for managing Linux containers across private, public and hybrid cloud environments.

<wikipedia> Kubernetes (commonly referred to as « K8s ») is an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a « platform for automating deployment, scaling, and operations of application containers across clusters of hosts ». It supports a range of container tools, including Docker.</wikipedia>

Kubernetes automates the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes. Kubernetes contains tools for orchestration, service discovery and load balancing that can be used with Docker and Rocket containers. As needs change, a developer can move container workloads in Kubernetes to another cloud provider without changing the code.

With Kubernetes, containers run in pods. A pod is a basic unit that hosts one or multiple containers, which share resources and are located on the same physical or virtual machine. For each pod, Kubernetes finds a machine with enough compute capacity and launches the associated containers. A node agent, called a Kubelet, manages pods, their containers and their images. Kubelets also automatically restart a container if it fails.
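To make the pod concept concrete, here is a minimal pod description written as a Python dictionary with the same structure as a Kubernetes manifest (the pod name, labels and image are placeholders); serialized to JSON or YAML, it could be submitted with kubectl or a client library:

```python
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "web-frontend",        # placeholder pod name
        "labels": {"app": "web"},      # label used for grouping / service discovery
    },
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx:1.13",  # placeholder container image
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

# Dump the manifest; it could be fed to `kubectl create -f -` as JSON.
print(json.dumps(pod, indent=2))
```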

Other core components of Kubernetes include:

  • Master: Runs the Kubernetes API and controls the cluster.
  • Label: A key/value pair used for service discovery. A label tags the containers and links them together into groups.
  • Replication Controller: Ensures that the requested number of pods is running according to the user’s specifications. This is what scales containers horizontally, ensuring there are more or fewer containers to meet the overall application’s computing needs.
  • Service: An automatically configured load balancer and integrator that runs across the cluster.

Containerization is an approach to virtualization in which the virtualization layer runs as an application on top of a common, shared operating system. As an alternative, containers can also run on an OS that’s installed into a conventional virtual machine running on a hypervisor.

Containers are portable across different on-premises and cloud platforms, making them suitable for applications that need to run across various computing environments.

Kubernetes is mainly used by application developers and IT system administrators. A comparable tool to Kubernetes is Docker Swarm, which offers native clustering capabilities.

Posted in Boulot | Comments closed on Kubernetes (notes VT)