npm ERR! Tracker "idealTree" already exists while creating the Docker image for Node project

I have created a Node.js project called simpleWeb. The project contains a package.json and an index.js.

index.js

    const express = require('express');

    const app = express();

    app.get('/', (req, res) => {
      res.send('How are you doing');
    });

    app.listen(8080, () => {
      console.log('Listening on port 8080');
    });

package.json


    {
      "dependencies": {
        "express": "*"
      },
      "scripts": {
        "start": "node index.js"
      }
    }


I have also created a Dockerfile to build the Docker image for my Node.js project.

Dockerfile

# Specify a base image
FROM node:alpine

# Install some dependencies
COPY ./ ./
RUN npm install

# Default command
CMD ["npm", "start"]

When I try to build the Docker image using the "docker build ." command, it throws the error below.


Error Logs

simpleweb » docker build .                                                    ~/Desktop/jaypal/Docker and Kubernatise/simpleweb
[+] Building 16.9s (8/8) FINISHED
=> [internal] load build definition from Dockerfile                                                                         0.0s
=> => transferring dockerfile: 37B                                                                                          0.0s
=> [internal] load .dockerignore                                                                                            0.0s
=> => transferring context: 2B                                                                                              0.0s
=> [internal] load metadata for docker.io/library/node:alpine                                                               8.7s
=> [auth] library/node:pull token for registry-1.docker.io                                                                  0.0s
=> [internal] load build context                                                                                            0.0s
=> => transferring context: 418B                                                                                            0.0s
=> [1/3] FROM docker.io/library/node:alpine@sha256:5b91260f78485bfd4a1614f1afa9afd59920e4c35047ed1c2b8cde4f239dd79b         0.0s
=> CACHED [2/3] COPY ./ ./                                                                                                  0.0s
=> ERROR [3/3] RUN npm install                                                                                              8.0s
------
> [3/3] RUN npm install:
#8 7.958 npm ERR! Tracker "idealTree" already exists
#8 7.969
#8 7.970 npm ERR! A complete log of this run can be found in:
#8 7.970 npm ERR!     /root/.npm/_logs/2020-12-24T16_48_44_443Z-debug.log
------
executor failed running [/bin/sh -c npm install]: exit code: 1


The log above points to a path, "/root/.npm/_logs/2020-12-24T16_48_44_443Z-debug.log", where the full log can be found.

However, that file is not present on my local machine.

I don't understand what the issue is.




This issue happens due to a change in npm starting with Node.js version 15. When no WORKDIR is specified, npm install is executed in the root directory of the container, which results in this error. Executing npm install in a project directory of the container, specified by WORKDIR, resolves the issue.

Use the following Dockerfile:

# Specify a base image
FROM node:alpine

# Install some dependencies
WORKDIR /usr/app
COPY ./ /usr/app
RUN npm install

# Set up a default command
CMD [ "npm", "start" ]
Alternatively, to take advantage of Docker's layer caching (npm install re-runs only when package.json changes):

# Specify a base image
FROM node:alpine

WORKDIR /usr/app

# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./

# Default command
CMD ["npm", "start"]

Note that if you change any of your index file and run docker build and docker run again, your new changes will automatically appear in your browser output.

The accepted answer is basically right, but when I tried it, it still didn't work. Here's why:


WORKDIR sets the context for the COPY instructions that follow it. Having already set the context to /usr/app, it is wrong to copy from ./ (the directory you are working in) to ./usr/app, as this produces the following structure in the container: /usr/app/usr/app.

As a result, CMD ["npm", "start"], which runs in the directory specified by WORKDIR (/usr/app), does not find the package.json.

I suggest using this Dockerfile:

FROM node:alpine

WORKDIR /usr/app

COPY ./ ./

RUN npm install

CMD ["npm", "start"]

"Long enough" for n_workers=4 is len_iterable=210 for example. For iterables equal or bigger than that, the Idling Share will be limited to one worker, a trait originally lost because of the 4-multiplication within the chunksize-algorithm in the first place.

figure11

The solutions given above didn't work for me. I changed the Node image in my Dockerfile from node:alpine to node:12.18.1 and it worked.


You should specify WORKDIR prior to the COPY instruction in order to ensure that npm install is executed inside the directory where all your application files are. Here is how you can do this:

WORKDIR /usr/app

# Install some dependencies
COPY ./ ./
RUN npm install


Note that you can simply write COPY ./ ./ (from the current local directory to the container directory, which is now /usr/app thanks to the WORKDIR instruction) instead of COPY ./ /usr/app.


Another good reason to use the WORKDIR instruction is that you avoid mixing your application files and directories with the root file system of the container (and thus avoid overriding file system directories in case your application directories have similar names).


One more thing: it is good practice to segment your configuration a bit, so that when you make a change, for example in your index.js (and thus need to rebuild your image), you will not need to re-run "npm install" as long as package.json has not been modified.


Your application is very basic, but think of big applications where "npm install" can take several minutes.

To make use of Docker's caching process, you can segment your configuration as follows:

WORKDIR /usr/app

# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./


This instructs Docker to cache the first COPY and RUN commands when package.json is not touched. So when you change, for instance, index.js and rebuild your image, Docker will use the cache for the previous instructions (the first COPY and RUN) and start executing at the second COPY. This makes your rebuild much quicker.

Example for image rebuild:

=> CACHED [2/5] WORKDIR /usr/app                                                                                                       0.0s
=> CACHED [3/5] COPY ./package.json ./                                                                                                 0.0s
=> CACHED [4/5] RUN npm install                                                                                                        0.0s
=> [5/5] COPY ./ ./

On the current latest node:alpine3.13 it is enough to copy the content of the root folder into the container's root folder with COPY ./ ./, while omitting the WORKDIR command. But as a practical solution I would recommend:

WORKDIR /usr/app - it is a convention among developers to put the project into a separate folder
COPY ./package.json ./ - here we copy only the package.json file in order to avoid unnecessary npm rebuilds
RUN npm install
COPY ./ ./ - here we copy all the files (remember to create a .dockerignore file in the root dir to avoid copying your node_modules folder)
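For reference, a minimal .dockerignore for this setup might look like the following; node_modules is the essential entry, the others are just common optional additions:

```
node_modules
npm-debug.log
.git
```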


We had a similar issue, so I replaced npm with yarn and it worked quite well. Here is the sample code:

FROM python:3.7-alpine

ENV CRYPTOGRAPHY_DONT_BUILD_RUST=1

# Install bash
RUN apk --update add bash zip yaml-dev
RUN apk add --update nodejs yarn build-base postgresql-dev gcc python3-dev musl-dev libffi-dev
RUN yarn config set prefix ~/.yarn

# Install serverless
RUN yarn global add serverless@2.49.0 --prefix /usr/local && \
    yarn global add serverless-pseudo-parameters@2.4.0 && \
    yarn global add serverless-python-requirements@4.3.0

RUN mkdir -p /code
WORKDIR /code

COPY requirements.txt .
COPY requirements-test.txt .

RUN pip install --upgrade pip
RUN pip install -r requirements-test.txt

COPY . .

CMD ["bash"]


Global install


In the event you want to install a package globally, outside of a working directory with a package.json, you should use the -g flag.

npm install -g <pkg>

This error may also trigger if CI software you're using, such as semantic-release, is built in Node and you attempt to install it outside of a working directory.


Try npm init and npm install express to create the package.json file.


You can specify a Node version less than 15:

# Specify a base image
FROM node:14

# Install some dependencies
COPY ./ ./
RUN npm install

# Default command
CMD ["npm", "start"]

docker run --rm -v $(pwd):/app -w /app node npm install --loglevel=verbose

The command assumes it is run from the root of the project and that there is a package.json file present. The -v $(pwd):/app option mounts the current working directory to the /app folder in the container, synchronizing the installed files back to the host directory. The -w /app option sets the work directory of the image to the /app folder. The --loglevel=verbose option makes the output of the install command verbose. More options can be found on the official Node Docker Hub page.


Personally, I use a Makefile to store several ephemeral container commands that are faster to run separately from the build process. But of course, anything is possible :)
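For illustration, such a Makefile might look like this sketch; the target names, the plain node image, and the /app mount path are example choices, not anything prescribed above:

```makefile
# Hypothetical targets for ephemeral container commands.
# Recipes must be indented with tabs.

install:
	docker run --rm -v $$(pwd):/app -w /app node npm install

shell:
	docker run --rm -it -v $$(pwd):/app -w /app node sh
```

Running "make install" then performs the npm install in a throwaway container without touching your host Node setup.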


Specifying the working directory as below inside the Dockerfile will work:

WORKDIR '/app'

Make sure to use --build in your docker-compose command to build from the Dockerfile again:

docker-compose up --build

Maybe you can change the Node version. Besides, don't forget WORKDIR:

FROM node:14-alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]