If you compare a serial program with a parallel version of the same program, the parallel version has to perform operations that the sequential one does not, in particular work related to coordinating the activity of several processors. These operations contribute to what is often called "parallel overhead": the extra work a parallel program must do simply because it is parallel. This is one of the factors that makes it difficult to get a 2x speedup on 2 processors, 4x on 4, or 32,000x on 32,000 processors.
If you examine the code of a parallel program, you will often find segments that are serial: they run on only one processor while the rest sit idle. Some parts of some algorithms are inherently non-parallel, and some operations are commonly left unparallelized even though they could be parallelized; input/output is a typical example, since parallelizing it requires some kind of parallel I/O system. This "serial fraction" sets a lower bound on the running time of your computation. Amdahl's Law explains this, and the article about it is a useful starting point for further reading.
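As a rough illustration, here is a minimal sketch of the speedup bound that Amdahl's Law implies, assuming for the sake of the example that 5% of the work is serial (that fraction is my assumption, not a number from your program):

```python
def amdahl_speedup(serial_fraction, processors):
    """Upper bound on speedup when a fixed fraction of the work stays serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Example: with only 5% serial work, 32,000 processors give roughly 20x, not 32,000x.
for p in (2, 4, 32, 1024, 32000):
    print(p, round(amdahl_speedup(0.05, p), 1))
```

Even a small serial fraction caps the achievable speedup at 1/serial_fraction, no matter how many processors you throw at the problem.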
Even if you have a program that parallelizes well, its scaling (i.e., how speed changes as the number of processors grows) is not perfect. For most parallel programs, the total parallel overhead (the sum of the processor time spent on operations that exist only because the computation is parallel) grows with the number of processors. This means that adding processors also adds overhead, and at some point in scaling up your program and your jobs the growth in overhead cancels out (or even outweighs) the growth in processing power. The Amdahl's Law article also covers Gustafson's Law, which is relevant here.
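A toy model makes this concrete. Assuming, purely for illustration, that overhead grows linearly with the number of processors (real overheads depend on your algorithm, communication pattern, and hardware), the speedup curve rises, peaks, and then falls:

```python
def runtime(serial, parallel_work, p, overhead_per_proc=0.01):
    """Toy model: serial part + evenly divided parallel part + overhead
    that grows linearly with the number of processors."""
    return serial + parallel_work / p + overhead_per_proc * p

base = runtime(1.0, 99.0, 1, overhead_per_proc=0.0)  # serial baseline, no overhead
for p in (1, 8, 64, 128, 256, 1024):
    print(p, round(base / runtime(1.0, 99.0, p), 1))  # speedup vs. one processor
```

In this toy model the speedup peaks at around a hundred processors and then gets worse; the exact shape of that curve is what you have to measure for your own program.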
I have stated all of this in very general terms, without reference to any current processor or computer architecture; what I am describing are properties of parallel computing as it is currently understood, not of any particular program or computer.
I disagree with @Daniel Pittman's statement that these questions are of purely theoretical interest. Some of us are working very hard to make our programs scale to very large numbers of processors (1000s). And almost all desktop and office development today, as well as much mobile development, targets multiprocessor systems, where making use of all those cores is a serious problem.
Finally, to answer your question: the point at which adding processors no longer increases execution speed depends on the architecture and the program, but fortunately it lends itself to empirical investigation. Figuring out the scalability of parallel programs, and finding ways to improve it, is a growing niche within the software development profession.
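As a minimal sketch of that kind of empirical measurement (the `work` function here is a placeholder of my own; substitute your real workload), you can time the same job at several worker counts and watch where the speedup curve flattens or turns down:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def work(n):
    """Placeholder CPU-bound task; replace with your real workload."""
    return sum(i * i for i in range(n))

def measure(workers, tasks=64, n=200_000):
    """Wall-clock time to run a fixed batch of tasks with a given worker count."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    baseline = measure(1)
    for w in (1, 2, 4, 8, 16):
        t = measure(w)
        print(f"{w:>2} workers: {t:.2f}s  speedup {baseline / t:.1f}x")
```

The worker count at which the measured speedup stops improving on your machine is, in practice, the answer to your question for that particular program.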