This paper establishes a new high-probability uniform error bound for function estimation in reproducing kernel Hilbert spaces (RKHS) via kernel ridge regression (KRR) with adaptively collected data. Our proof relies on two key ingredients: (i) a concentration inequality for the adaptively weighted average noise in the KRR estimator, developed through an exponential super-martingale argument, and (ii) the peeling technique from empirical process theory. We apply the new error bound to the regret analysis of Gaussian process (GP) bandit algorithms, specifically GP Upper Confidence Bound and GP Thompson Sampling. The resulting cumulative regret upper bounds match the known minimax lower bounds (up to logarithmic factors) for estimating RKHS functions. This match resolves an open problem in the GP bandit literature and demonstrates the tightness of our uniform error bound.
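For context, a minimal sketch of the KRR estimator referenced above; the notation ($\lambda > 0$ the regularization parameter, $k$ the kernel of $\mathcal{H}$, and $(x_s, y_s)_{s \le t}$ the adaptively collected data) is assumed here and not fixed by the abstract:
\[
\hat{f}_t \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}} \sum_{s=1}^{t} \bigl(y_s - f(x_s)\bigr)^2 + \lambda \|f\|_{\mathcal{H}}^2,
\qquad
\hat{f}_t(x) \;=\; k_t(x)^\top \bigl(K_t + \lambda I\bigr)^{-1} y_{1:t},
\]
where $K_t = [k(x_i, x_j)]_{i,j \le t}$, $k_t(x) = (k(x_1, x), \dots, k(x_t, x))^\top$, and $y_{1:t} = (y_1, \dots, y_t)^\top$. The uniform error bound controls $\sup_x |\hat{f}_t(x) - f^*(x)|$ with high probability even when each $x_s$ depends on the past observations.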