We propose a way to accelerate the solution of symmetric positive definite linear systems by computing partial spectral information associated with the ill-conditioned part of the coefficient matrix. To this end, we combine the block Conjugate Gradient (blockCG) algorithm with inverse subspace iteration into a purely iterative algorithm that we call BlockCGSI. We analyze the convergence of the blockCG algorithm and show how the total amount of computational work can be reduced by controlling the accuracy of the linear system solves at each inverse iteration. We also improve the global convergence of this technique by using Chebyshev polynomials as a spectral filter when building the starting vectors. We further introduce the concept of a \emph{``sliding window''}, an algorithmic feature that enables the computation of a \emph{near}-invariant subspace of any size. The proposed method is well suited to large-scale problems in which several linear systems with the same coefficient matrix (or with matrices having very similar spectral properties) but different right-hand sides must be solved consecutively. The spectral information computed by the BlockCGSI algorithm is then used to remove the effect of the smallest eigenvalues in two ways: either by building a Spectral Low-Rank Update (SLRU) preconditioner, which essentially adds $1$ to these eigenvalues, or by deflating the initial residual so as to remove the part of the solution associated with the smallest eigenvalues. Both techniques yield substantial reductions in the number of iterations and in the computational work of each subsequent run of the Conjugate Gradient algorithm. We report on experiments with a 2D diffusion equation.
The analysis of costs and benefits, in terms of floating-point operations, validates the strategy as an effective way to speed up the solution of symmetric positive definite linear systems.
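To make the SLRU idea concrete, the following is a minimal NumPy sketch, not the BlockCGSI algorithm itself: the near-invariant subspace is obtained here by a direct eigensolver purely for illustration, and the test matrix is a 1D Laplacian rather than the 2D diffusion problem. With $W$ spanning the eigenvectors of the $k$ smallest eigenvalues $\Lambda_s$, the preconditioner $M^{-1} = I + W \Lambda_s^{-1} W^T$ gives $M^{-1} A$ the eigenvalues $\lambda_i + 1$ on $\mathrm{span}(W)$ and leaves the others unchanged, which is the "adds $1$ to these eigenvalues" effect described above.

```python
import numpy as np

def cg(A, b, M_inv=None, tol=1e-10, maxiter=1000):
    """(Preconditioned) Conjugate Gradient; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r) if M_inv else r.copy()
    p = z.copy()
    rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r) if M_inv else r.copy()
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# 1D Laplacian: SPD, with smallest eigenvalues clustered near zero
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Eigenpairs of the k smallest eigenvalues (computed directly here;
# BlockCGSI would build this near-invariant subspace iteratively)
lam, V = np.linalg.eigh(A)
k = 10
W, lam_s = V[:, :k], lam[:k]

# SLRU preconditioner: M^{-1} r = r + W diag(1/lam_s) W^T r,
# so M^{-1} A has eigenvalues lam_i + 1 on span(W), lam_i elsewhere
slru = lambda r: r + W @ ((W.T @ r) / lam_s)

x_plain, it_plain = cg(A, b)
x_slru, it_slru = cg(A, b, M_inv=slru)
print(it_plain, it_slru)  # SLRU requires markedly fewer iterations
```

The deflation alternative mentioned above would instead remove the $\mathrm{span}(W)$ component of the initial residual before running plain CG; both variants pay off only because $W$ is reused across the sequence of right-hand sides.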