If you work with data, you need to understand what that data actually is, both in terms of content and context. That understanding is how you determine what you can use the data for, which statistical techniques are appropriate, and how valid your findings are. Simple numeracy is not enough; you need to understand what your numbers represent.
Politics is a vast subject area. It is not just about being an MP or predicting the outcome of elections. Think of all the government departments and what their policies cover: business, health, education, employment, the benefits system, justice, migration, international trade and so on. All of that is politics, and all of it needs people who understand those specific issues and can analyse data about them, whether in some level of government, the civil service or elsewhere. Some of that data could be analysed by someone who is simply a good mathematician with passing knowledge of the subject, and some of it sits more in the domain of economists, social statisticians, behavioural scientists and so on, but there is a lot of work that can, and should, be done by people who know about politics.
So, for example, when forecasting real-world events, you have to build certain assumptions into your modelling simply because of the complexity and uncertainty of real things. A numerate person could easily build a model containing anything they want and run the numbers. But it is your knowledge of the topic, and of the context in which your existing data was collected, that allows you to assess whether your theoretical assumptions are reasonable and to understand things like exchangeability and stationarity. These, in effect, determine the extent to which models built on data about things that have already happened can be used to predict things that will happen in the future, and which statistical techniques you can use to improve the predictive validity of your model.
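To make the stationarity point concrete, here is a minimal Python (NumPy) sketch. The `looks_stationary` function and its threshold are illustrative inventions, not a real statistical test (a proper analysis would use something like an augmented Dickey-Fuller test): it just asks whether the early part of a series behaves like the later part, which is the intuition behind whether a model trained on the past can be trusted on the future.

```python
import numpy as np

def looks_stationary(series, n_splits=2, tol=0.5):
    """Crude, illustrative check: split the series into chunks and
    ask whether their means and spreads are roughly comparable.
    (A real analysis would use a formal test such as ADF.)"""
    chunks = np.array_split(np.asarray(series, dtype=float), n_splits)
    means = [c.mean() for c in chunks]
    stds = [c.std() for c in chunks]
    scale = np.std(series) + 1e-9  # avoid dividing concerns by zero
    return (max(means) - min(means)) < tol * scale and \
           (max(stds) - min(stds)) < tol * scale

# A series that behaves the same way throughout: the past is a
# reasonable guide to the future.
flat = np.sin(np.arange(200.0))

# A trending series: the first half looks nothing like the second,
# so a model naively fitted to early data will extrapolate badly.
trend = np.arange(200.0)

print(looks_stationary(flat))   # True
print(looks_stationary(trend))  # False
```

The design point is not the arithmetic but the judgment call: only someone who knows the subject can say whether a detected shift reflects a genuine change in the underlying process or an artefact of how the data was collected.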
Similarly, because politics deals with complex real-world phenomena, there are an enormous number of potential confounding factors: something may appear to be directly correlated with something else, or even to imply causation, when in fact one or more other factors are mediating that relationship. Even if you had data for every possible confounding variable, which you wouldn't, throwing all of them into regressions because you have no real knowledge of which factors matter carries a high risk of spurious correlations and alpha (Type I) errors arising from data dredging, as well as multicollinearity caused by your explanatory variables being too closely correlated with each other. If your data is nothing to you but abstract numbers, it is much harder to prevent these problems, and others such as omitted variable bias, to spot them when they do occur, and to decide what, if anything, you should do about them. You can do nothing with what you have found, because you don't know whether it means anything at all when applied to the real world or whether you have just found some patterns in some numbers that are meaningless or even actively misleading.
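The data-dredging risk is easy to demonstrate. The sketch below, a Python (NumPy) simulation using entirely made-up noise rather than any real political dataset, generates an outcome and 200 candidate explanatory variables that are all pure random noise, then correlates each candidate with the outcome. At a conventional p < 0.05 threshold, several of them will look "significant" anyway, purely by chance.

```python
import numpy as np

rng = np.random.default_rng(42)

# 50 "observations" of an outcome that is pure noise, plus 200
# candidate explanatory variables that are also pure noise,
# generated completely independently of the outcome.
n_obs, n_vars = 50, 200
outcome = rng.normal(size=n_obs)
predictors = rng.normal(size=(n_vars, n_obs))

# Correlate every candidate variable with the outcome.
corrs = np.array([np.corrcoef(outcome, p)[0, 1] for p in predictors])

# |r| > 0.28 is roughly the two-sided p < 0.05 threshold at n = 50,
# so each test alone carries about a 5% alpha (Type I) error rate;
# run 200 such tests and spurious "findings" are almost guaranteed.
significant = np.abs(corrs) > 0.28
print(f"{significant.sum()} of {n_vars} pure-noise variables "
      f"look 'significant' at p < 0.05")
```

Nothing here has any real-world meaning, yet a naive search would report several "relationships". Domain knowledge is what lets you restrict the candidate variables to ones with a plausible mechanism in the first place, rather than fishing through everything you happen to have.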
There is a joke about a bored scientist who decided to do an experiment with some flies:
The scientist captured a fly and sedated it. When the fly came around, the scientist shouted at it to fly away and the fly did.
The scientist then captured another fly, sedated it and cut off one of its wings. When the fly came around, the scientist shouted at it to fly away and the fly flapped its one wing and tried to fly away.
The scientist then captured a third fly, sedated it and cut off both its wings. When the fly came around, the scientist again shouted at it to fly away but the fly didn't even try to fly away.
The scientist's conclusion: If you cut both wings off a fly it becomes deaf.