Archive | Boulot

Is Rugged DevOps the new buzzword?

Read today on WhatIs.com:

« Rugged DevOps is an approach to software development that places a priority on making sure code is secure before it gets to production. Rugged DevOps takes the lean thinking and Agile mindset that DevOps embraces and applies it to « ruggedizing » software and making sure that security is not a post-development consideration. Rugged DevOps is often used in software development for cloud environments.

The approach requires programmers and operations team members to possess a high degree of security awareness and have the ability to automate testing throughout the software development lifecycle. Despite a large percentage of the IT industry adopting agile and DevOps processes, security testing cycles are still often based on the traditional and cumbersome waterfall approach. This means many organizations forget to do security qualifications tests, such as PCI compliance checks and risk assessments, until it’s almost too late.

To sync security with DevOps cycles, a rugged DevOps team must log integration and delivery processes at a very granular level, so security issues can be identified as they arise. The more granular the records are, the easier it becomes to identify security holes. Both Jira and Cucumber are popular tools for keeping logs in rugged DevOps environments.

For automated DevOps and security testing, there’s a large portfolio of product types, including static application security testing, dynamic application security testing, interactive application security testing and runtime application security testing. Vendors include Contrast Security, Fortify, Veracode and Waratek. »

That being said, what’s the difference between Rugged DevOps and DevSecOps?

Here is an answer from Sumo Logic:

In traditional software development environments, security has always been considered a separate aspect – even an afterthought – but now the two practices have converged to produce safer software in the form of Rugged DevOps and DevSecOps.

A Rugged approach to development and deployment produces applications that stand up to the rockiest tests

Rugged DevOps is an emerging trend that emphasizes a security-first approach to every phase of software development. DevSecOps combines traditional DevOps approaches with a more integrated and robust approach to security. These approaches are not mutually exclusive; they take slightly different paths toward the same goal of shifting security leftward and focusing on it continually throughout the production pipeline.

As today’s environments evolve toward continuous delivery models that can see multiple production releases per day, any miscalculation or error in security can clog the production pipeline. Below is a look at how both Rugged DevOps and DevSecOps approaches can help your organization achieve state of the art design security.

What Is Rugged DevOps?

Rugged DevOps takes the traditional view of security teams as an obstacle and turns it upside down, engineering security into all aspects of design and deployment. Instead of security playing the role of traffic cop slowing down progress, a Rugged DevOps approach makes security a kind of police escort, helping the delivery process proceed with speed and safety.

Rugged DevOps starts with creating secure code. In traditional models code is developed, then penetration testing and automated tools are used to deem the software ‘safe.’ Secure code development involves a different approach, where previously separate teams (development, Q/A, testing, etc.) interact throughout the entire software lifecycle, addressing not just security holes but industry trends and other factors to develop ‘defensible’ code through communication, collaboration, and competition.

The key components of a successful rugged development culture are outlined in the Rugged Manifesto, the definitive document on the subject:

I am rugged and, more importantly, my code is rugged.
I recognize that software has become a foundation of our modern world.
I recognize the awesome responsibility that comes with this foundational role.
I recognize that my code will be used in ways I cannot anticipate, in ways it was not designed, and for longer than it was ever intended.
I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.
I recognize these things — and I choose to be rugged.

The Rugged DevOps approach was developed to address problems in the traditional delivery method, which handled security by finding problems in the existing software, reporting them, and fixing them. As production releases come to market with ever increasing speed, this system quickly gets overwhelming, and organizations often resort to building out compliance systems that slow development to a crawl.

The rugged approach inverts that model, fixing the slowdown effect of applying security as an afterthought by attacking security at every level of development. Dan Glass, the chief information security officer (CISO) at American Airlines, outlines his company’s approach to using Rugged DevOps to improve and streamline delivery. Their R.O.A.D approach keys on four focus areas.

Rugged Systems. American Airlines (AA) builds security in at all development stages, resulting in systems that are resilient, adaptable, and repeatedly tested.

Operational Excellence. The AA culture produces teams that build reliable, sustainable, and fast software, with each team owning quality control near the source and empowered to make improvements.

Actionable Intelligence. Glass stresses the need for telemetry that efficiently processes logs and reveals data that is correct, meaningful and relevant. This data is communicated to teams in as close to real-time as possible, empowering them to address issues.

Defensible Platforms. AA develops and maintains environments that are hardened and capable of surviving sustained attacks. Security teams are involved at every step of the R.O.A.D process, ensuring that adequate defenses are in the software’s DNA.

The Rugged DevOps approach focuses on security through every stage of the development and delivery process, resulting in systems that can endure the rigors of a production environment full of potential hostility. But Rugged isn’t a stand-alone approach to safety. It overlaps with the emerging trend of DevSecOps, which takes a similar approach to securing and hardening applications from inception forward.

What Is DevSecOps?

Wrap security into every step of development to safely deliver product

DevSecOps is the new philosophy of completely integrating security into the DevOps process. It calls for previously unprecedented collaboration between release engineers and security teams, resulting in a ‘Security as Code’ culture. From the DevSecOps Manifesto:

“By developing security as code, we will strive to create awesome products and services, provide insights directly to developers, and generally favor iteration over trying to always come up with the best answer before a deployment. We will operate like developers to make security and compliance available to be consumed as services. We will unlock and unblock new paths to help others see their ideas become a reality.”

The DevSecOps movement, like DevOps itself, is aimed at bringing new, clearer thinking to processes that tend to bog down in their own complexity. It is a natural and necessary response to the bottleneck effect of older security models on modern, continuous delivery cycles, but requires new attitudes and a shuffling of old teams.

What Is the ‘Sec’ in ‘DevSecOps’?

SecOps, short for Security Operations, is an approach for bridging the traditional gaps between IT security and operations teams in an effort to break silo thinking and speed safe delivery. The emerging practice requires a sea change in cultures where these departments were separate, if not frequently at odds. SecOps builds bridges of shared ownership and responsibility for the secure delivery process, eliminating communications and bureaucratic barriers.

Tools and processes are powerful, but it’s people who deliver constant security

Rugged DevOps and DevSecOps: The Shift to Continuous Security

Rugged DevOps and DevSecOps may sound like the latest tech industry buzz phrases, but they are critical considerations in contemporary business. In a market where software can change and respond to customers’ needs multiple times per day, old security models do not work. Potential is lost behind fear of flaws, stifling the continuous delivery process. This cultural shift is helping organizations address security in a continuous delivery environment.

In this great resource from White Hat Security, the authors outline seven habits for successfully adopting Rugged DevOps:

1. Increase Trust And Transparency Between Dev, Sec, And Ops.
2. Understand The Probability And Impact Of Specific Risks
3. Discard Detailed Security Road Maps In Favor Of Incremental Improvements
4. Use The Continuous Delivery Pipeline To Incrementally Improve Security Practices
5. Standardize Third-Party Software And Then Keep Current
6. Govern With Automated Audit Trails
7. Test Preparedness With Security Games

By incorporating these practices, organizations can deliver better products faster, find and fix problems more efficiently, and use automated audit trails to take master-level control of their Rugged DevOps environment.

Get More Help

For more resources and the help you need to get your teams to embrace and manage security and to safely steer the wild new waters of continuous delivery, contact us today!

 

Keep on playing!

Posted in Boulot | 0 comments

OVH VPS Testing Platform…

A victim of its own success, my hackMeIfYouCan lasted a good ten months. Let’s try to do better this time 🙂

Tomcat 9.0.4 : http://vps305886.ovh.net:8080/

Apache2 over Debian : http://vps305886.ovh.net/

Posted in Boulot, Clic | 0 comments

CORiIn 2018

09h30 – 10h30 – Welcome – Coffee

10h30 – 10h50 – Introduction, Éric Freyssinet

250 attendees this year; the amphitheater is full, so the venue will change next year.

Roughly a third of the attendees are not going to FIC, and about 10% are locals.

10h50 – 11h30 – L’investigation numérique saisie par le droit des données personnelles, Eve Matringe

First talk, offering crossed legal/judicial and technical perspectives.

GDPR (RGPD): « Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data »

+ Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA

 

11h30 – 12h10 – Full packet capture for the masses, Xavier Mertens – @XMe

Moloch = Full Packet Capture Framework: https://github.com/aol/moloch

The sensors are installed on the servers in a Docker container and send their captures via cron over SSH.
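A minimal sketch of what such a sensor-side job could look like (the schedule, paths and host name below are my own assumptions, not details given in the talk):

# Hypothetical cron entry on a sensor: every 15 minutes, push rotated pcap files
# to the central Moloch host over SSH, removing the local copies once transferred.
*/15 * * * * rsync -az -e ssh --remove-source-files /data/pcap/ moloch@collector.example.org:/data/pcap/incoming/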

#balancetonpcap

12h10 – 12h50 – Analyse des jobs BITS, Morgane Celton and Morgan Delahaye (ANSSI)

https://github.com/ANSSI-FR/bits_parser (coming soon)

12h50 – 14h00 – Lunch break

14h00 – 14h40 – CCleaner, Paul Rascagnères

14h40 – 15h20 – Retour d’expérience – Wannacry & NotPetya, Quentin Perceval and Vincent Nguyen (CERT-W)

15h20 – 16h00 – Coffee break

16h00 – 16h40 – Comment ne pas communiquer en temps de crise : une perspective utile pour la gestion d’incident cybersécurité, Rayna Stamboliyska

16h40 – 17h20 – Wannacry, NotPetya, Bad Rabbit: De l’autre coté du miroir, Sébastien Larinier

17h20 – 18h00 – Forensic Analysis in IoT, François Bouchaud

18h00 – Closing remarks

Posted in Boulot | Comments closed

The 10 key Amazon Web Services definitions

Amazon Web Services (AWS)
Amazon Web Services (AWS) is a comprehensive, scalable cloud computing platform offered by Amazon.com.
IaaS
Infrastructure as a Service (IaaS) is a form of cloud computing that provides virtualized computing resources over the Internet. Together with Software as a Service (SaaS) and Platform as a Service (PaaS), IaaS is one of the three main categories of cloud services.
EC2
An EC2 instance is a virtual server hosted in Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure.
S3
Amazon Simple Storage Service (Amazon S3) is a scalable web storage service designed for online backup and archiving of data and application programs.
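As a small illustration of that backup and archiving use case, here is a hedged sketch using the AWS command line tools (the bucket and file names are placeholders):

# Assumes the AWS CLI is installed and configured with valid credentials.
aws s3 mb s3://my-backup-bucket                                  # create a bucket
aws s3 cp backup-2018-01.tar.gz s3://my-backup-bucket/archives/  # upload an archive
aws s3 ls s3://my-backup-bucket/archives/                        # list the stored objects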

AWS Lambda
AWS Lambda is an event-driven cloud service from Amazon Web Services. It lets developers provision resources for a programming function and pay for them as they are consumed, without worrying about the amount of Amazon compute or storage resources required.

DWaaS (cloud data warehouse)
Data warehouse as a service (DWaaS) is an outsourcing model in which a service provider configures and manages the hardware and software resources that a data warehouse requires, while the customer supplies the data and pays for the managed service.

Amazon Aurora
Amazon Aurora is a MySQL-compatible relational database engine from Amazon Web Services (AWS). It lets you use the code, applications and drivers of MySQL databases in Aurora with little or no adaptation.

AWS CloudTrail
Offered by Amazon Web Services (AWS), AWS CloudTrail is a web service that records the calls made through the application programming interface (API) and monitors logs.

 

Amazon Elasticsearch
Amazon Elasticsearch Service (Amazon ES) lets developers launch and operate Elasticsearch – the Java-based open source search and analytics engine – in the AWS cloud. They can use Elasticsearch to monitor applications in real time, study logs and analyze clickstreams.
Amazon Virtual Private Cloud
Amazon Virtual Private Cloud (Amazon VPC) lets a developer create a virtual network for isolated resources in the Amazon Web Services cloud.

 

Posted in Boulot | Comments closed

SIEM (cheat sheet)

In the field of computer security, security information and event management (SIEM) software products and services combine security information management (SIM) and security event management (SEM). They provide real-time analysis of security alerts generated by applications and network hardware.

Vendors sell SIEM as on-premise software or appliances but also as managed services, or cloud-based instances; these products are also used to log security data and generate reports for compliance purposes.[1]

Overview

The acronyms SEM, SIM and SIEM have been sometimes used interchangeably.[2]

The segment of security management that deals with real-time monitoring, correlation of events, notifications and console views is known as security event management (SEM).

The second area provides long-term storage as well as analysis, manipulation and reporting of log data and security records of the type collated by SEM software, and is known as security information management (SIM).[3]

As with many meanings and definitions of capabilities, evolving requirements continually shape derivatives of SIEM product-categories. Organizations are turning to big data platforms, such as Apache Hadoop, to complement SIEM capabilities by extending data storage capacity and analytic flexibility.[4][5]

Advanced SIEMs have evolved to include user and entity behavior analytics (UEBA) and security orchestration and automated response (SOAR).

The term security information and event management (SIEM), coined by Mark Nicolett and Amrit Williams of Gartner in 2005, describes:[6]

  • the product capabilities of gathering, analyzing and presenting information from network and security devices
  • identity and access-management applications
  • vulnerability management and policy-compliance tools
  • operating-system, database and application logs
  • external threat data

A key focus is to monitor and help manage user and service privileges, directory services and other system-configuration changes; as well as providing log auditing and review and incident response.[3]

Capabilities/Components

  • Data aggregation: Log management aggregates data from many sources, including network, security, servers, databases, applications, providing the ability to consolidate monitored data to help avoid missing crucial events.
  • Correlation: looks for common attributes, and links events together into meaningful bundles. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information (a minimal Splunk-style example follows this list). Correlation is typically a function of the Security Event Management portion of a full SIEM solution.[7]
  • Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of immediate issues. Alerting can be to a dashboard, or sent via third party channels such as email.
  • Dashboards: Tools can take event data and turn it into informational charts to assist in seeing patterns, or identifying activity that is not forming a standard pattern.[8]
  • Compliance: Applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance and auditing processes.[9]
  • Retention: employing long-term storage of historical data to facilitate correlation of data over time, and to provide the retention necessary for compliance requirements. Long term log data retention is critical in forensic investigations as it is unlikely that discovery of a network breach will be at the time of the breach occurring.[10]
  • Forensic analysis: The ability to search across logs on different nodes and time periods based on specific criteria. This mitigates having to aggregate log information in your head or having to search through thousands and thousands of logs.[9]
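Since most of the posts on this page revolve around Splunk, here is a minimal sketch of what a correlation-and-alerting search could look like in Splunk’s search language. It counts SSH authentication failures per source address over the last hour and keeps only the noisy sources; the sourcetype, field name and threshold are assumptions, not taken from any particular SIEM product:

index=main sourcetype=linux_secure "Failed password" earliest=-1h
| stats count AS failures BY src_ip
| where failures > 20

Saved as a scheduled alert and charted on a dashboard, a search like this touches the correlation, alerting and dashboard capabilities listed above in one place.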

Use cases

Computer security researcher Chris Kubecka presented the following successful SIEM use cases at the hacking conference 28C3 (Chaos Communication Congress).[11]

  • SIEM visibility and anomaly detection could help detect zero-day exploits or polymorphic code, primarily because of low anti-virus detection rates against this type of rapidly changing malware.
  • Parsing, log normalization and categorization can occur automatically, regardless of the type of computer or network device, as long as it can send a log.
  • Visualization with a SIEM using security events and log failures can aid in pattern detection.
  • Protocol anomalies which can indicate a mis-configuration or a security issue can be identified with a SIEM using pattern detection, alerting, baseline and dashboards.
  • SIEMs can detect covert, malicious communications and encrypted channels.
  • Cyberwarfare can be detected by SIEMs with accuracy, discovering both attackers and victims.

Today, most SIEM systems work by deploying multiple collection agents in a hierarchical manner to gather security-related events from end-user devices, servers, network equipment, as well as specialized security equipment like firewalls, antivirus or intrusion prevention systems. The collectors forward events to a centralized management console where security analysts sift through the noise, connecting the dots and prioritizing security incidents.

In some systems, pre-processing may happen at edge collectors, with only certain events being passed through to a centralized management node. In this way, the volume of information being communicated and stored can be reduced. Although advancements in machine learning are helping systems to flag anomalies more accurately, analysts must still provide feedback, continuously educating the system about the environment.

Here are some of the most important features to review when evaluating SIEM products:

  • Integration with other controls – Can the system give commands to other enterprise security controls to prevent or stop attacks in progress?
  • Artificial intelligence – Can the system improve its own accuracy through machine and deep learning?
  • Threat intelligence feeds – Can the system support threat intelligence feeds of the organization’s choosing or is it mandated to use a particular feed?
  • Robust compliance reporting – Does the system include built-in reports for common compliance needs, and does it provide the organization with the ability to customize or create new reports?
  • Forensics capabilities – Can the system capture additional information about security events by recording the headers and contents of packets of interest?

Pronunciation

The SIEM acronym is alternately pronounced SEEM or SIM (with a silent e).

 

Posted in Boulot | Comments closed

To read / to listen to / to watch

14/12/2017

France Culture: L’Invité des Matins (part 2) by Guillaume Erner (24 min)
Neutralité du net, hégémonie des GAFA : la démocratie prise dans la toile (2ème partie)

With Benjamin Bayart and Sébastien Soriano

https://www.franceculture.fr/emissions/linvite-des-matins-2eme-partie/neutralite-du-net-hegemonie-des-gafa-la-democratie-prise-dans-la-toile-2eme-partie


Posted in Boulot, Clic | Comments closed

Kubernetes (notes VT)

 


Kubernetes is Google’s open source system for managing Linux containers across private, public and hybrid cloud environments.

<wikipedia> Kubernetes (commonly referred to as « K8s ») is an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a « platform for automating deployment, scaling, and operations of application containers across clusters of hosts ». It supports a range of container tools, including Docker.</wikipedia>

Kubernetes automates the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes. Kubernetes contains tools for orchestration, service discovery and load balancing that can be used with Docker and Rocket containers. As needs change, a developer can move container workloads in Kubernetes to another cloud provider without changing the code.

With Kubernetes, containers run in pods. A pod is a basic unit that hosts one or multiple containers, which share resources and are located on the same physical or virtual machine. For each pod, Kubernetes finds a machine with enough compute capacity and launches the associated containers. A node agent, called a Kubelet, manages pods, their containers and their images. Kubelets also automatically restart a container if it fails.

Other core components of Kubernetes include:

  • Master: Runs the Kubernetes API and controls the cluster.
  • Label: A key/value pair used for service discovery. A label tags the containers and links them together into groups.
  • Replication Controller: Ensures that the requested number of pods is running to the user’s specifications. This is what scales containers horizontally, ensuring there are more or fewer containers to meet the overall application’s computing needs.
  • Service: An automatically configured load balancer and integrator that runs across the cluster.
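As a quick sketch of how these pieces fit together from the command line (kubectl must already be pointed at a running cluster; the names and image are placeholders, and a Deployment is used here as the modern equivalent of a Replication Controller):

kubectl create deployment web --image=nginx   # pods are created and labelled app=web for us
kubectl scale deployment web --replicas=3     # keep three pods running at all times
kubectl expose deployment web --port=80       # Service: a stable, load-balanced endpoint
kubectl get pods -l app=web -o wide           # select pods by label and show their nodes
kubectl label pods -l app=web tier=frontend   # attach an extra key/value label to the group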

Containerization is an approach to virtualization in which the virtualization layer runs as an application on top of a common, shared operating system. As an alternative, containers can also run on an OS that’s installed into a conventional virtual machine running on a hypervisor.

Containers are portable across different on-premises and cloud platforms, making them suitable for applications that need to run across various computing environments.

Kubernetes is mainly used by application developers and IT system administrators. A comparable tool to Kubernetes is Docker Swarm, which offers native clustering capabilities.

Posted in Boulot | Comments closed

How do I configure a Splunk Forwarder on Linux?


From Splunk Command Line Reference:

http://docs.splunk.com/Documentation/Splunk/latest/Admin/AccessandusetheCLIonaremoteserver

Note: the CLI may ask you to authenticate – it’s asking for the LOCAL credentials, so if you haven’t changed the admin password on the forwarder, you should use admin/changeme

Steps for Installing/Configuring Linux forwarders:

Step 1: Download Splunk Universal Forwarder: http://www.splunk.com/download/universalforwarder (64bit package if applicable!). You will have to create an account to download any piece of Splunk software

Step 2: Install Forwarder

tar -xvf splunkforwarder-6.6.3-e21ee54bc796-Linux-x86_64.tgz -C /opt

This will install the Splunk code in the /opt/splunkforwarder directory.

Step 3: Enable boot-start/init script:

/opt/splunkforwarder/bin/splunk enable boot-start

(start Splunk: /opt/splunkforwarder/bin/splunk start)

Step 4: Enable Receiving input on the Index Server

Configure the Splunk index server to receive data, either via the web GUI or the CLI:

  • using the web GUI: Manager -> sending and receiving -> configure receiving -> new
  • using the CLI: /opt/splunk/bin/splunk enable listen 9997

Where 9997 (default) is the receiving port for Splunk Forwarder connections

Step 5: Configure Forwarder connection to Index Server:

/opt/splunkforwarder/bin/splunk add forward-server hostname.domain:9997

(where hostname.domain is the fully qualified address or IP of the index server, such as indexer.splunk.com, and 9997 is the receiving port you created on the indexer)

Step 6: Test Forwarder connection:

/opt/splunkforwarder/bin/splunk list forward-server

Step 7: Add Data:

/opt/splunkforwarder/bin/splunk add monitor /path/to/app/logs/ -index main -sourcetype %app%

Where

/path/to/app/logs/ is the path to application logs on the host that you want to bring into Splunk,
%app% is the name you want to associate with that type of data

This will create a file: inputs.conf in /opt/splunkforwarder/etc/apps/search/local/

— here is some documentation on inputs.conf: http://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf
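For reference, the generated stanza will look something like the sketch below, using the placeholder path and sourcetype from the command above:

# /opt/splunkforwarder/etc/apps/search/local/inputs.conf
[monitor:///path/to/app/logs/]
index = main
sourcetype = %app%
disabled = false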

Note: System logs in /var/log/ are covered in the configuration part of Step 7. If you have application logs in /var/log/*/

Step 8 (Optional): Install and Configure UNIX app on Indexer and nix forwarders:

On the Splunk indexer, go to Apps -> Manage Apps -> Find more Apps Online -> search for ‘Splunk App for Unix and Linux’ -> install the ‘Splunk App for Unix and Linux’. Restart Splunk if prompted, then open the UNIX app -> Configure.

Once you’ve configured the UNIX app on the server, you’ll want to install the related Add-on: « Splunk Add-on for Unix and Linux » on the Universal Forwarder.

Go to http://apps.splunk.com/ and find the « Splunk Add-on for Unix and Linux » (Note you want the ADD-ON, not the APP – there is a big difference!).

Copy the contents of the Add-On zip file to the Universal Forwarder, in: /opt/splunkforwarder/etc/apps/.

If done correctly, you will have the directory « /opt/splunkforwarder/etc/apps/Splunk_TA_nix » and inside it will be a few directories along with a README & license files.

Restart the Splunk forwarder (/opt/splunkforwarder/bin/splunk restart)

Note: The data collected by the unix app is by default placed into a separate index called ‘os’ so it will not be searchable within splunk unless you either go through the UNIX app, or include the following in your search query: “index=os” or “index=os OR index=main” (don’t paste doublequotes).

You also will have to install sysstat if you want to monitor your server resources.

Step 9 (Optional): Customize UNIX app configuration on forwarders:

Look at inputs.conf in /opt/splunkforwarder/etc/apps/unix/local/ and /opt/splunkforwarder/etc/apps/unix/default/. The default inputs.conf shows what the app can do, but everything is disabled.

The ~local/inputs.conf shows what has been enabled – if you want to change polling intervals or disable certain scripts, make the changes in ~local/inputs.conf.

Step 10 (Optional): Configure File System Change Monitoring (for configuration files): http://docs.splunk.com/Documentation/Splunk/4.3.2/Data/Monitorchangestoyourfilesystem

 

Note that Splunk also has a centralized configuration management server called Deployment Server. This can be used to define server classes and push out specific apps and configurations to those classes. So you may want your production server class to have the unix app configured to execute the scripts listed in ~local/inputs.conf at the default values, while your QA servers only need a few of them, at longer polling intervals.

Using Deployment Server, you can configure these classes, configure the app once centrally, and push the appropriate app/configuration to the right systems.

Enjoy !

Need help troubleshooting?

Do the same on the Microsoft Windows platform: click, click, click…

Splunk official how-to on that part: http://docs.splunk.com/Documentation/Splunk/6.2.3/Data/Useforwardingagentstogetdata

Posted in Boulot, Splunk | Comments closed

Eclipse

Received via a digital newsletter these last few days (WhatIs@lists.techtarget.com), I found the content below interesting (history and outlook of the platform), so I am taking the liberty of reproducing it here (without any authorization).


Eclipse is a free, Java-based development platform known for its plug-ins that allow developers to develop and test code written in other programming languages. Eclipse is released under terms of the Eclipse Public License.

Eclipse got its start in 2001 when IBM donated three million lines of code from its Java tools to develop an open source integrated development environment (IDE). The IDE was initially overseen by a consortium of software vendors seeking to create and foster a new community that would complement Apache’s open source community. Rumor has it that the platform’s name was derived from a secondary goal, which was to eclipse Microsoft’s popular IDE, Visual Studio.

Today, Eclipse is managed by the Eclipse Foundation, a not-for-profit corporation whose strategic members include CA Technologies, IBM, Oracle and SAP. The foundation, which was created in 2004, supports Eclipse projects with a well-defined development process that values quality, application programming interface (API) stability and consistent release schedules. The foundation provides infrastructure and intellectual property (IP) management services to the Eclipse community and helps community members market and promote commercial software products that are based on Eclipse.

In 2016, Microsoft announced it would join the Eclipse Foundation and support the integration of Visual Studio by giving Eclipse developers full access to Visual Studio Team services. Oracle donated the Hudson continuous integration server it inherited from Sun Microsystems to Eclipse in 2011 and is expected to donate the Java 2 Platform, Enterprise Edition (Java EE) to Eclipse in the near future.

Official site: https://www.eclipse.org/home/index.php
Wikipedia: https://fr.wikipedia.org/wiki/Eclipse_(projet)

Posted in Boulot | Comments closed

Playing with Splunk and REST API


How to Stream Twitter into Splunk in 10 Simple Steps ?

January 8, 2014, in Splunk, by Discovered Intelligence


So many people talk about the need to index tweets from Twitter into Splunk that I figured I would write a post to explain just how easy it is.

Within 10 steps and a few minutes, you will be streaming real-time tweets into Splunk, with the fields all extracted and the twitter data fully searchable.

Assumptions

    Splunk is installed and running.
    If you don’t have Splunk, you can download it from http://splunk.com/download.
    Splunk will run fine on your laptop for this exercise.
    You have a working Twitter account

The 10 Steps

1. Go to https://dev.twitter.com/ and log in with your twitter credentials

2. At the top right, click on “My applications”

3. Click on the “Create New App” button and complete the box for Name, Description and Website. You don’t need a callback URL for this exercise. Once you have completed these three fields, click on the “Create Your Twitter Application” button at the bottom of the screen.

4. Your application is now completed and we now need to generate the OAuth keys. You should see a series of tabs on the screen – click on the ‘API Keys’ tab. At the bottom of the screen when in the API Keys tab, click on the “Create my access token” button.

5. Wait about 30 seconds or so then click on the ‘Test OAuth‘ button at the top right of the screen. You should see all fields completed with cryptic codes. If you don’t, hit back, then click the ‘Test OAuth’ button again after another 30 seconds or so. Keep this page handy – we will need it in a couple of minutes.

6. Ok, now log into your Splunk environment search head, where we are going to install the free REST API modular input application. Copy the following URL and replace mysplunkserver with whatever your Splunk server name is, then click on the “Install Free” button: https://mysplunkserver:8000/en-US/manager/search/apps/remote?q=rest+api. If you are not using SSL, change it to http rather than https. You can alternatively install the application from the Splunk app store here: http://apps.splunk.com/app/1546/

7. Click on the button to “Restart Splunk” after installation of the app.

8. This app adds a new data input method to Splunk called REST. Once logged back into Splunk, click on “Settings” (top right) then “Data Inputs” from the Settings menu.

9. The Data Inputs screen will be displayed and you will see a new data input method called REST. Click on this link, then click on the “New” green button to bring up a new REST input configuration screen.

10. Ok, last step! We are going to complete the configuration details to get our Twitter data. I have only included the fields you need to configure and everything else can be left blank, unless you need to enter in a proxy to get out to the internet.
> REST API Input Name: Twitter (or whatever you want to call the feed)
> Endpoint URL: https://stream.twitter.com/1.1/statuses/filter.json
> HTTP Method: GET
> Authentication Type: oauth1
> OAUTH1 Client Key, Client Secret, Access Token, Access Token Secret: Complete from your Twitter Developer configuration screen in Step 5 above.
> URL Arguments: track=#bigdata,#splunk^stall_warnings=true
The above URL arguments are examples. In this case, I am selecting to bring in tweets that contain the hashtags #bigdata and #splunk. I am using the ‘track’ streaming API parameter to do this. At this point, you should read here: https://dev.twitter.com/docs/streaming-apis/parameters#track. Also note that if you want to track multiple keywords, these are separated by a comma. However, the REST API configuration screen expects a comma delimiter between key=value pairs. Notice that I have used a ^ delimiter instead, as I need to use commas for my track values.
> Response Type: json
> Streaming Request: Yes (ensure the box is checked)
> Request Timeout: 86400
Here we are setting the timeout to 86400 seconds, which is the number of seconds in a day. As long as at least one tweet comes through per day, you will be ok. If the timeout window is less than the amount of time between tweets streaming in, then the data input will time out and not recover without re-enabling the input or, I would imagine, a Splunk restart.
> Delimiter: ^ (or whatever delimiter you used in the URL arguments field)
> Set Sourcetype: Manual
> Sourcetype: Tweets (or whatever sourcetype name you want)
> More Settings: Yes (check the box). Optionally provide a host name and an index you want the tweets to go into. The default index is main.
Note: For reference, the above configuration is stored in etc/system/local/inputs.conf

This is what the final screen will look like. Hit the “Save” button when everything looks good.

Search the Tweets!

You are all done! After hitting save, the tweets should start coming in immediately. Assuming you used a sourcetype of twitter, you can now go to the search bar in Splunk and run this query:

sourcetype=twitter earliest=-1h

You should see data coming in. You will notice that Twitter includes a TON of fields with each tweet – it is quite awesome actually. All the usernames, hashtags, users in the tweets, URLs (even translated URLs) are all extracted and searchable.
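For example, assuming Splunk’s automatic JSON field extraction is in place and the usual Twitter field names come through (user.screen_name is an assumption about how the fields are named on your instance), the most active authors of the last hour can be listed with:

sourcetype=twitter earliest=-1h
| stats count BY user.screen_name
| sort - count
| head 10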

Of course, the above does simplify things. You should definitely read the Twitter API documentation properly.

Posted in Boulot, Splunk | Comments closed