For Michel Guiblain of Neocles, “the real plus of Splunk is its ease of installation and deployment, even on a large, heterogeneous infrastructure like ours, with many different types of technical equipment. As a production manager, as soon as I want to do or add something, I start by deploying an agent to integrate it into the overall architecture provided by Splunk.”
An opinion shared by Olivier Ondet of OBS, for whom “with Splunk you can quickly deploy a solution to collect and explore data, to validate a concept or measure an ROI before an industrialization phase. During roll-out, Splunk also lets us optimize the total cost of the projects we deploy, by optimizing both the infrastructure and the number of man-days needed.”
Its ability to scale is also unanimously praised. Just add as many servers as needed: incoming data is automatically distributed, and search performance increases linearly with the number of machines.
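Adding indexers is mainly a configuration matter. A minimal sketch of joining a new indexer to a cluster, assuming Splunk 6.x-era indexer clustering terminology; the hostname, port, and key below are placeholders:

```
# server.conf on each new indexer (cluster peer) — all values illustrative
[replication_port://9887]

[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = changeme
```

Once the peer restarts, the cluster master begins distributing incoming data and replicated buckets to it without further manual intervention.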
Among Splunk’s other frequently cited qualities are the richness of its SPL language, its source-agnostic approach enabled by its Universal Forwarder and its many Splunk Add-ons, the flexibility of its operation, which avoids having to parse data upstream, and of course the extensibility provided by its Apps mechanism.
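As an illustration of SPL’s expressiveness, a short hypothetical search that turns raw web access logs into an hourly error trend per host, assuming an index named `web` populated with the standard `access_combined` sourcetype:

```
index=web sourcetype=access_combined status>=500
| timechart span=1h count by host
```

Two pipeline stages are enough to go from raw events to a chart-ready time series, which is the kind of conciseness interviewees point to.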
For Philippe Borrel, another quality should not be overlooked: “the usability and the number of graphical assistants available, as well as the many clear tutorials (although provided almost exclusively by the vendor, and in English), make Splunk a very good tool for approaching data science for those with little or no experience.”
The limits of Splunk
Splunk has long been criticized for its inability to simply and automatically correlate or compress data to limit the volume of disk storage.
These reproaches were erased by the innovations introduced in versions 6.2, 6.3 and 6.4 of Splunk. Today, the main obstacle to its expansion remains its licensing model: Splunk limits the amount of new data that can be indexed per day. There is a free version, but it is capped at 500 MB per day.
When you acquire a Splunk Enterprise license, you buy the right to index a certain volume of data added to Splunk each day, regardless of data retention length, number of users, or number of servers.
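The arithmetic of that model is simple enough to sketch. A hypothetical back-of-the-envelope check of daily ingest against a purchased license quota; all source names and volumes below are made-up figures, not numbers from the article:

```python
# Back-of-the-envelope Splunk license check: sum the daily volume of each
# data source and compare it with the purchased daily indexing quota.
# All figures are illustrative.

def daily_ingest_gb(sources: dict[str, float]) -> float:
    """Total GB/day indexed across all sources."""
    return sum(sources.values())

def license_headroom(sources: dict[str, float], license_gb: float) -> float:
    """Remaining GB/day before the daily license quota is exhausted."""
    return license_gb - daily_ingest_gb(sources)

sources = {
    "firewalls": 120.0,        # GB/day
    "active_directory": 40.0,  # after filtering out irrelevant events
    "web_servers": 250.0,
    "databases": 60.0,
}

print(daily_ingest_gb(sources))           # 470.0
print(license_headroom(sources, 500.0))   # 30.0
```

Since retention, users, and servers are not metered, the only lever on cost is the daily volume itself, which is exactly why the filtering discussed below matters.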
“You have to know how to deploy Splunk in a careful and controlled way,” explains Pierre Kirchner, whose Splunk implementation ingests about 500 GB per day. “At Natixis, we have set up filtering mechanisms to index only the relevant data. For many devices, we take all the logs, because the volume is not huge and parsers already exist as standard. But on other technologies, and especially on relatively verbose Microsoft technologies, filters have been put in place to contain the volume of logs.”
He adds: “to define our filtering properly, we started from the definition of the need, in other words from what we wanted to search for and the scenarios we wanted to put in place. For example, we worked with the Active Directory experts to select the truly relevant data. In the end, managing the data to be ingested by Splunk is collaborative work between several entities of the IT department.”
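Filtering of this kind is typically done at parse time, by routing unwanted events to Splunk’s null queue before they count against the license. A minimal sketch of such a filter, assuming a Windows Security event source; the stanza name and event code are illustrative, not those used at Natixis:

```
# props.conf — attach a filtering transform to the verbose sourcetype
[WinEventLog:Security]
TRANSFORMS-drop_noise = drop_noisy_events

# transforms.conf — events matching the regex are sent to the null
# queue and discarded, so they are never indexed (or licensed)
[drop_noisy_events]
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue
```

The regex is the collaborative part Kirchner describes: deciding which event codes are noise requires the domain experts, not the Splunk administrators alone.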
An opinion shared by Michel Guiblain, who considers it essential “to have people who know the infrastructure by heart, to sort out the right data and avoid indexing unnecessary data.”
He emphasizes, however, that since pricing decreases sharply with volume, only the first tiers are complicated to budget; in practice, just long enough for the product to prove itself to management.