Over the last 10 years, governments have been seeking the ideal set of enablers to realize their dream of "advanced information technology use" in every aspect of life. Some of them set concrete targets such as "switching to mobile government (or e-government) by 20xx," took structural actions, and injected millions of dollars into the economy to foster an analytical environment, but somehow this journey has never ended.
I do not want to lean on the "it is a journey, not a destination" mentality, but we have to admit that advanced technology cannot be brought in by a magical touch overnight. It requires a certain level of patience and a harmony between multiple factors. This harmony should be in place at different levels across different phases of deployment.
These phases are still not clearly defined, and the levels depend on the maturity of each government. But there are certain generic elements that help governments assess their analytical maturity. We can break them down into the following enablers:
1) Scarcity of talent: Public entities in most countries do not have any clear career path for analytical experts
2) Day-to-day work: Routine tasks keep employees busy and give them an excuse to complain while they neglect their analytical improvement responsibilities
3) Accessibility barriers: Lack of transparency and rigid chains of command are the biggest enemies of public entities, causing them to function at a sub-optimal level. We can also add legal constraints, a culture of secrecy, and reluctance due to a lack of incentives
4) Demand barriers: What if citizens are not aware of what governments provide? Or what if citizens do not trust their governments?
5) Availability of information: Is the information or data available? Is it well structured?
6) Security & privacy: How does the entity overcome the sensitivity of data acquisition, storage, use and retention?
7) Accuracy: Is the information accurate? Is there any quality control in place?
8) Decentralized repository: Is the data spread among different entities?
9) Findability & searchability: Is the data findable?
10) Collection of data: Do the employees understand the data context? What about the cultural context? Both are crucial, especially when translating sentiments into hard data
11) Analysis: Are the employees sure that the samples are not biased? Do they ensure that they are not falling into a loop of apophenia, seeing patterns where none exist?
12) Usage: Are the interpretations accurate? Are there conflicting insights?
13) Capacity of tools: Computational capacity, storage capacity, speed etc.
14) Accuracy of tools: Does the public entity have right tools for the right purposes?
15) Integration of tools: Are the tools aligned across the entire organization?
16) Cost of tools: Does the public entity conduct exercises to measure cost efficiency?
This list could continue with further criteria such as stability, persistence, leadership, and so on. The reason I haven't added them is that those elements are critical for governments by default, no matter what they want to deploy.