Kind of, but not quite.
The null hypothesis is either true, or it isn't - there's no probability assigned to it. A given vaccine either reduces cases of Covid, or it doesn't.
For the p-value, we assume that the null hypothesis is true, and the p-value is then the probability of observing our data (or data more 'extreme') under that assumption. So we assume our Covid vaccine does nothing, and when we see a difference in case rates between the placebo and vaccine cohorts, the p-value indicates how likely a difference at least that large would have been if there were truly no difference.
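You can actually see this logic in a quick simulation. A minimal sketch, with made-up illustrative numbers (two equal arms, a handful of cases) and a simplified randomization model: under the null, each case is equally likely to land in either arm, so we shuffle and count how often chance alone produces a gap as big as the one we saw.

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only: equal-sized arms,
# 90 cases in the placebo arm, 50 in the vaccine arm.
placebo_cases, vaccine_cases = 90, 50
observed_diff = placebo_cases - vaccine_cases
total_cases = placebo_cases + vaccine_cases

# Under the null hypothesis the vaccine does nothing, so each case
# falls into either arm with probability 1/2. Simulate that many times
# and count how often the split is at least as lopsided as observed.
extreme = 0
n_sims = 20_000
for _ in range(n_sims):
    sim_placebo = sum(random.random() < 0.5 for _ in range(total_cases))
    sim_diff = sim_placebo - (total_cases - sim_placebo)
    if abs(sim_diff) >= abs(observed_diff):  # two-sided test
        extreme += 1

p_value = extreme / n_sims
print(f"estimated p-value: {p_value:.4f}")
```

The key point the code makes concrete: the probability is computed entirely in a world where the null is true. A small p-value doesn't say "the null is probably false", it says "data like ours would be rare if the null were true".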
I don't mean to be a nitpicky bore, but I thought it might be interesting. I've heard complaints from a lot of statisticians that p-values are often misunderstood and misused, especially in medical statistics. To the point where research is being designed to give the best chance of a 'statistically significant' result, even when the underlying research is no good. (I've experienced something similar on a project I was helping with, where p-values were the be-all and end-all, despite us trying to explain why they were pointless in our case.)