It all depends on which parameters you put inside O(...).
If the target weight is limited by a number W, then the problem has O(n*W) complexity, as you mentioned.
But if the weights can be arbitrarily large and you need an algorithm whose complexity is independent of W, then the problem is NP-complete (O(2^n * n) in the most naive implementation).
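To make that naive bound concrete, here is a minimal brute-force sketch in Python (my own illustration; the function name and sample data are made up): it enumerates all 2^n subsets and spends O(n) checking each one, giving O(2^n * n) no matter how big W is.

    def knapsack_bruteforce(weights, values, W):
        """Try every subset of items: O(2^n * n), independent of W."""
        n = len(weights)
        best = 0
        for mask in range(1 << n):   # one bitmask per subset: 2^n of them
            total_w = total_v = 0
            for i in range(n):       # O(n) work per subset
                if mask & (1 << i):
                    total_w += weights[i]
                    total_v += values[i]
            if total_w <= W:
                best = max(best, total_v)
        return best

    print(knapsack_bruteforce([3, 4, 5], [30, 50, 60], 8))  # -> 90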
To understand NP-completeness you have to learn a bit of complexity theory, but the gist is this: it's NP-complete because an efficient algorithm for the knapsack problem would also yield efficient algorithms for SAT, TSP, and the rest.
O(n*W) looks like polynomial time, but it is not; it is pseudo-polynomial.
Time complexity measures the time an algorithm takes as a function of the length in bits of its input. The dynamic programming solution is indeed linear in the value of W, but exponential in the length of W, and that is what matters! For example, W = 1,000,000,000 fits in just 30 bits, yet the inner loop below runs a billion times.
More precisely, the time complexity of the dynamic programming solution for the knapsack problem is basically given by a nested loop:
    // here goes other stuff we don't care about (constant-time setup)
    for (i = 1 to n)
        for (j = 0 to W)
            // here goes other stuff (constant work per table cell)
Thus, the time complexity is clearly O(n*W).
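Fleshing that loop out into something runnable (again Python, again my own sketch with hypothetical names): the table entry dp[i][j] stores the best value achievable with the first i items and capacity j, and the two loops give exactly the O(n*W) cost.

    def knapsack_dp(weights, values, W):
        """Classic dynamic programming solution: O(n * W) time and space."""
        n = len(weights)
        # dp[i][j] = best value using the first i items with capacity j
        dp = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):        # outer loop: n iterations
            for j in range(W + 1):       # inner loop: W + 1 iterations
                dp[i][j] = dp[i - 1][j]  # option 1: skip item i
                if weights[i - 1] <= j:  # option 2: take item i, if it fits
                    dp[i][j] = max(dp[i][j],
                                   dp[i - 1][j - weights[i - 1]] + values[i - 1])
        return dp[n][W]

    print(knapsack_dp([3, 4, 5], [30, 50, 60], 8))  # -> 90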
What does it mean to increase the size of the input linearly? It means using progressively longer item arrays (so n, n+1, n+2, ...) and progressively longer W (so, if W is x bits long, after one step we use x+1 bits, then x+2 bits, ...). But the value of W grows exponentially with x, so the running time is not really polynomial in the input size, it's exponential (it merely looks polynomial, hence the name: "pseudo-polynomial").
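You can see that growth with a quick sketch (my own illustration, assuming a fixed n): every extra bit in W roughly doubles the number of inner-loop iterations, even though the input only got one bit longer.

    n = 100
    for bits in (10, 20, 30, 40):
        W = (1 << bits) - 1  # largest W that fits in `bits` bits
        print(f"{bits}-bit W -> about n*W = {n * W:,} loop iterations")

Going from 10 to 40 bits of W takes the count from about 10^5 to about 10^14 iterations.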