
Merge branch 'master' into mraunak/tf_win_clang
mraunak authored Apr 3, 2024
2 parents e944e76 + ff989f0 commit b69d741
Showing 16 changed files with 198 additions and 194 deletions.
2 changes: 1 addition & 1 deletion site/en/guide/migrate/evaluator.ipynb
@@ -122,7 +122,7 @@
"\n",
"In TensorFlow 1, you can configure a `tf.estimator` to evaluate the estimator using `tf.estimator.train_and_evaluate`.\n",
"\n",
"In this example, start by defining the `tf.estimator.Estimator` and speciyfing training and evaluation specifications:"
"In this example, start by defining the `tf.estimator.Estimator` and specifying training and evaluation specifications:"
]
},
{
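For context, a minimal sketch of the pattern this cell describes (defining a `tf.estimator.Estimator` plus training and evaluation specifications), with hypothetical toy input functions for illustration only:

```python
import tensorflow.compat.v1 as tf

# Hypothetical toy input pipelines, for illustration only.
def train_input_fn():
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
    labels = [[2.0], [4.0], [6.0], [8.0]]
    return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(2)

def eval_input_fn():
    features = {"x": [[5.0], [6.0]]}
    labels = [[10.0], [12.0]]
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

# Define the estimator, then the training and evaluation specifications.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column("x")])
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=100)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, steps=1)

# Run training and evaluation in a single call.
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```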
2 changes: 1 addition & 1 deletion site/en/guide/sparse_tensor.ipynb
@@ -620,7 +620,7 @@
"\n",
"However, there are a few cases where it can be useful to distinguish zero values from missing values. In particular, this allows for one way to encode missing/unknown data in your training data. For example, consider a use case where you have a tensor of scores (that can have any floating point value from -Inf to +Inf), with some missing scores. You can encode this tensor using a sparse tensor where the explicit zeros are known zero scores but the implicit zero values actually represent missing data and not zero. \n",
"\n",
"Note: This is generally not the intended usage of `tf.sparse.SparseTensor`s; and you might want to also consier other techniques for encoding this such as for example using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically."
"Note: This is generally not the intended usage of `tf.sparse.SparseTensor`s; and you might want to also consider other techniques for encoding this such as for example using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically."
]
},
{
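For context, a small sketch of the encoding that note describes, with made-up scores; the mask-tensor alternative is included as well:

```python
import tensorflow as tf

# Scores tensor: position (0, 1) holds an *explicit* zero (a known zero score),
# while every position absent from `indices` is an implicit zero (missing data).
scores = tf.sparse.SparseTensor(
    indices=[[0, 1], [1, 2]],
    values=[0.0, 7.5],
    dense_shape=[2, 4])

# The alternative technique from the note: a dense boolean mask of known entries.
known_mask = tf.cast(
    tf.scatter_nd(scores.indices, tf.ones_like(scores.values), scores.dense_shape),
    tf.bool)

print(tf.sparse.to_dense(scores))  # explicit and implicit zeros look identical here
print(known_mask)                  # ...but the mask distinguishes known from missing
```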
4 changes: 2 additions & 2 deletions site/en/guide/tf_numpy_type_promotion.ipynb
@@ -178,7 +178,7 @@
"* `f32*` means Python `float` or weakly-typed `f32`\n",
"* `c128*` means Python `complex` or weakly-typed `c128`\n",
"\n",
"The asterik (*) denotes that the corresponding type is “weak” - such a dtype is temporarily inferred by the system, and could defer to other dtypes. This concept is explained more in detail [here](#weak_tensor)."
"The asterisk (*) denotes that the corresponding type is “weak” - such a dtype is temporarily inferred by the system, and could defer to other dtypes. This concept is explained more in detail [here](#weak_tensor)."
]
},
{
@@ -449,7 +449,7 @@
"source": [
"### WeakTensor Construction\n",
"\n",
"WeakTensors are created if you create a tensor without specifing a dtype the result is a WeakTensor. You can check whether a Tensor is \"weak\" or not by checking the weak attribute at the end of the Tensor's string representation."
"WeakTensors are created if you create a tensor without specifying a dtype the result is a WeakTensor. You can check whether a Tensor is \"weak\" or not by checking the weak attribute at the end of the Tensor's string representation."
]
},
{
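For context, a brief sketch of the WeakTensor behavior described above, assuming the guide's setup call that enables the new promotion mode:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

# Assumed setup from this guide: enable the new "all" dtype promotion mode.
tnp.experimental_enable_numpy_behavior(dtype_conversion_mode="all")

a = tf.constant(5)            # no dtype specified -> weakly-typed i32*
b = tf.constant(5, tf.int32)  # explicit dtype     -> regular i32
print(a)  # the repr ends with `weak=True`
print(b)  # no weak attribute in the repr
```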
2 changes: 0 additions & 2 deletions site/en/guide/versions.md
@@ -171,12 +171,10 @@ incrementing the major version number for TensorFlow Lite, or vice versa.
The API surface that is covered by the TensorFlow Lite Extension APIs version
number is comprised of the following public APIs:

-```
* [tensorflow/lite/c/c_api_opaque.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/c_api_opaque.h)
* [tensorflow/lite/c/common.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h)
* [tensorflow/lite/c/builtin_op_data.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/builtin_op_data.h)
* [tensorflow/lite/builtin_ops.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/builtin_ops.h)
-```

Again, experimental symbols are not covered; see [below](#not_covered) for
details.
2 changes: 1 addition & 1 deletion site/en/hub/tutorials/s3gan_generation_with_tf_hub.ipynb
@@ -86,7 +86,7 @@
"2. Click **Runtime > Run all** to run each cell in order.\n",
" * Afterwards, the interactive visualizations should update automatically when you modify the settings using the sliders and dropdown menus.\n",
"\n",
"Note: if you run into any issues, youn can try restarting the runtime and rerunning all cells from scratch by clicking **Runtime > Restart and run all...**.\n",
"Note: if you run into any issues, you can try restarting the runtime and rerunning all cells from scratch by clicking **Runtime > Restart and run all...**.\n",
"\n",
"[1] Mario Lucic\\*, Michael Tschannen\\*, Marvin Ritter\\*, Xiaohua Zhai, Olivier\n",
" Bachem, Sylvain Gelly, [High-Fidelity Image Generation With Fewer Labels](https://arxiv.org/abs/1903.02271), ICML 2019."
2 changes: 1 addition & 1 deletion site/en/hub/tutorials/wiki40b_lm.ipynb
@@ -214,7 +214,7 @@
" # Generate the tokens from the language model\n",
" generation_outputs = module(generation_input_dict, signature=\"prediction\", as_dict=True)\n",
"\n",
" # Get the probablities and the inputs for the next steps\n",
" # Get the probabilities and the inputs for the next steps\n",
" probs = generation_outputs[\"probs\"]\n",
" new_mems = [generation_outputs[\"new_mem_{}\".format(i)] for i in range(n_layer)]\n",
"\n",
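For context, one common way to pick the next token from such a probability row (a hedged sketch with a made-up vocabulary size; the notebook's own sampling logic may differ):

```python
import numpy as np

vocab_size = 8  # made-up size, for illustration only
probs_row = np.array([0.05, 0.4, 0.1, 0.05, 0.2, 0.1, 0.05, 0.05])

# Stochastic decoding: sample the next token id from the distribution...
next_token = np.random.choice(vocab_size, p=probs_row)
# ...or greedy decoding: always take the most likely token.
greedy_token = int(np.argmax(probs_row))
print(next_token, greedy_token)
```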
91 changes: 43 additions & 48 deletions site/en/install/source.md
@@ -34,8 +34,7 @@ Install the TensorFlow *pip* package dependencies (if using a virtual
environment, omit the `--user` argument):

<pre class="prettyprint lang-bsh">
<code class="devsite-terminal">pip install -U --user pip numpy wheel packaging requests opt_einsum</code>
<code class="devsite-terminal">pip install -U --user keras_preprocessing --no-deps</code>
<code class="devsite-terminal">pip install -U --user pip</code>
</pre>

Note: A `pip` version >19.0 is required to install the TensorFlow 2 `.whl`
@@ -60,30 +59,32 @@ file.

Clang is a C/C++/Objective-C compiler that is compiled in C++ based on LLVM. It
is the default compiler to build TensorFlow starting with TensorFlow 2.13. The
-current supported version is LLVM/Clang 16.
+current supported version is LLVM/Clang 17.

[LLVM Debian/Ubuntu nightly packages](https://apt.llvm.org) provide an automatic
installation script and packages for manual installation on Linux. Make sure you
run the following command if you manually add llvm apt repository to your
package sources:

<pre class="prettyprint lang-bsh">
<code class="devsite-terminal">sudo apt-get update && sudo apt-get install -y llvm-16 clang-16</code>
<code class="devsite-terminal">sudo apt-get update && sudo apt-get install -y llvm-17 clang-17</code>
</pre>

+Now `/usr/lib/llvm-17/bin/clang` is the actual path to clang in this case.

Alternatively, you can download and unpack the pre-built
-[Clang + LLVM 16](https://github.com/llvm/llvm-project/releases/tag/llvmorg-16.0.0).
+[Clang + LLVM 17](https://github.com/llvm/llvm-project/releases/tag/llvmorg-17.0.2).

Below is an example of steps you can take to set up the downloaded Clang + LLVM
-16 binaries on Debian/Ubuntu operating systems:
+17 binaries on Debian/Ubuntu operating systems:

1. Change to the desired destination directory: `cd <desired directory>`

1. Load and extract an archive file...(suitable to your architecture):
<pre class="prettyprint lang-bsh">
<code class="devsite-terminal">wget https://github.com/llvm/llvm-project/releases/download/llvmorg-16.0.0/clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
<code class="devsite-terminal">wget https://github.com/llvm/llvm-project/releases/download/llvmorg-17.0.2/clang+llvm-17.0.2-x86_64-linux-gnu-ubuntu-22.04.tar.xz
</code>
<code class="devsite-terminal">tar -xvf clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
<code class="devsite-terminal">tar -xvf clang+llvm-17.0.2-x86_64-linux-gnu-ubuntu-22.04.tar.xz
</code>
</pre>

@@ -93,10 +94,10 @@ Below is an example of steps you can take to set up the downloaded Clang + LLVM
have to replace anything, unless you have a previous installation, in which
case you should replace the files:
<pre class="prettyprint lang-bsh">
<code class="devsite-terminal">cp -r clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04/* /usr</code>
<code class="devsite-terminal">cp -r clang+llvm-17.0.2-x86_64-linux-gnu-ubuntu-22.04/* /usr</code>
</pre>

-1. Check the obtained Clang + LLVM 16 binaries version:
+1. Check the obtained Clang + LLVM 17 binaries version:
<pre class="prettyprint lang-bsh">
<code class="devsite-terminal">clang --version</code>
</pre>
@@ -240,19 +241,6 @@ There are some preconfigured build configs available that can be added to the

## Build and install the pip package

-The pip package is build in two steps. A `bazel build` commands creates a
-"package-builder" program. You then run the package-builder to create the
-package.

-### Build the package-builder
-Note: GPU support can be enabled with `cuda=Y` during the `./configure` stage.

-Use `bazel build` to create the TensorFlow 2.x package-builder:

-<pre class="devsite-terminal devsite-click-to-copy">
-bazel build [--config=option] //tensorflow/tools/pip_package:build_pip_package
-</pre>

#### Bazel build options

Refer to the Bazel
@@ -268,33 +256,42 @@ that complies with the manylinux2014 package standard.

### Build the package

-The `bazel build` command creates an executable named `build_pip_package`—this
-is the program that builds the `pip` package. Run the executable as shown
-below to build a `.whl` package in the `/tmp/tensorflow_pkg` directory.
+To build the pip package, you need to specify the `--repo_env=WHEEL_NAME` flag;
+depending on the provided name, the corresponding package will be created. For example:

-To build from a release branch:
+To build the TensorFlow CPU package:
<pre class="devsite-terminal devsite-click-to-copy">
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu
</pre>

+To build the TensorFlow GPU package:
<pre class="devsite-terminal devsite-click-to-copy">
-./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda
</pre>

-To build from master, use `--nightly_flag` to get the right dependencies:
+To build the TensorFlow TPU package:
<pre class="devsite-terminal devsite-click-to-copy">
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_tpu --config=tpu
</pre>

+To build a nightly package, set `tf_nightly` instead of `tensorflow`, e.g.
+to build the CPU nightly package:
<pre class="devsite-terminal devsite-click-to-copy">
-./bazel-bin/tensorflow/tools/pip_package/build_pip_package --nightly_flag /tmp/tensorflow_pkg
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tf_nightly_cpu
</pre>

Although it is possible to build both CUDA and non-CUDA configurations under the
same source tree, it's recommended to run `bazel clean` when switching between
these two configurations in the same source tree.
+The generated wheel will be located in:
<pre class="devsite-terminal devsite-click-to-copy">
bazel-bin/tensorflow/tools/pip_package/wheel_house/
</pre>

### Install the package

The filename of the generated `.whl` file depends on the TensorFlow version and
your platform. Use `pip install` to install the package, for example:

<pre class="devsite-terminal prettyprint lang-bsh">
-pip install /tmp/tensorflow_pkg/tensorflow-<var>version</var>-<var>tags</var>.whl
+pip install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl
</pre>

Success: TensorFlow is now installed.

@@ -344,26 +341,23 @@
Expand Down Expand Up @@ -344,26 +341,23 @@ virtual environment:

1. Optional: Configure the build—this prompts the user to answer build
configuration questions.
-2. Build the tool used to create the *pip* package.
-3. Run the tool to create the *pip* package.
-4. Adjust the ownership permissions of the file for outside the container.
+2. Build the *pip* package.
+3. Adjust the ownership permissions of the file for outside the container.

<pre class="devsite-disable-click-to-copy prettyprint lang-bsh">
<code class="devsite-terminal tfo-terminal-root">./configure # if necessary</code>

<code class="devsite-terminal tfo-terminal-root">bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package</code>

<code class="devsite-terminal tfo-terminal-root">./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt # create package</code>

<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
<code class="devsite-terminal tfo-terminal-root">bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu --config=opt</code>
`
<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
</pre>

Install and verify the package within the container:

<pre class="prettyprint lang-bsh">
<code class="devsite-terminal tfo-terminal-root">pip uninstall tensorflow # remove current version</code>

<code class="devsite-terminal tfo-terminal-root">pip install /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
<code class="devsite-terminal tfo-terminal-root">pip install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
<code class="devsite-terminal tfo-terminal-root">cd /tmp # don't import from source directory</code>
<code class="devsite-terminal tfo-terminal-root">python -c "import tensorflow as tf; print(tf.__version__)"</code>
</pre>
@@ -401,19 +395,17 @@ with GPU support:
<pre class="devsite-disable-click-to-copy prettyprint lang-bsh">
<code class="devsite-terminal tfo-terminal-root">./configure # if necessary</code>

<code class="devsite-terminal tfo-terminal-root">bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package</code>

<code class="devsite-terminal tfo-terminal-root">./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt # create package</code>
<code class="devsite-terminal tfo-terminal-root">bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda --config=opt</code>

<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
</pre>

Install and verify the package within the container and check for a GPU:

<pre class="prettyprint lang-bsh">
<code class="devsite-terminal tfo-terminal-root">pip uninstall tensorflow # remove current version</code>

<code class="devsite-terminal tfo-terminal-root">pip install /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
<code class="devsite-terminal tfo-terminal-root">pip install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
<code class="devsite-terminal tfo-terminal-root">cd /tmp # don't import from source directory</code>
<code class="devsite-terminal tfo-terminal-root">python -c "import tensorflow as tf; print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))"</code>
</pre>
Expand All @@ -430,6 +422,7 @@ Success: TensorFlow is now installed.

<table>
<tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th></tr>
<tr><td>tensorflow-2.16.1</td><td>3.9-3.12</td><td>Clang 17.0.6</td><td>Bazel 6.5.0</td></tr>
<tr><td>tensorflow-2.15.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td></tr>
<tr><td>tensorflow-2.14.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td></tr>
<tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang 16.0.0</td><td>Bazel 5.3.0</td></tr>
@@ -468,6 +461,7 @@ Success: TensorFlow is now installed.

<table>
<tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th><th>cuDNN</th><th>CUDA</th></tr>
<tr><td>tensorflow-2.16.1</td><td>3.9-3.12</td><td>Clang 17.0.6</td><td>Bazel 6.5.0</td><td>8.9</td><td>12.3</td></tr>
<tr><td>tensorflow-2.15.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td><td>8.9</td><td>12.2</td></tr>
<tr><td>tensorflow-2.14.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td><td>8.7</td><td>11.8</td></tr>
<tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang 16.0.0</td><td>Bazel 5.3.0</td><td>8.6</td><td>11.8</td></tr>
@@ -508,6 +502,7 @@ Success: TensorFlow is now installed.

<table>
<tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th></tr>
<tr><td>tensorflow-2.16.1</td><td>3.9-3.12</td><td>Clang from xcode 13.6</td><td>Bazel 6.5.0</td></tr>
<tr><td>tensorflow-2.15.0</td><td>3.9-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 6.1.0</td></tr>
<tr><td>tensorflow-2.14.0</td><td>3.9-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 6.1.0</td></tr>
<tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 5.3.0</td></tr>
2 changes: 1 addition & 1 deletion site/en/r1/guide/autograph.ipynb
@@ -241,7 +241,7 @@
"id": "m-jWmsCmByyw"
},
"source": [
"AutoGraph supports common Python statements like `while`, `for`, `if`, `break`, and `return`, with support for nesting. Compare this function with the complicated graph verson displayed in the following code blocks:"
"AutoGraph supports common Python statements like `while`, `for`, `if`, `break`, and `return`, with support for nesting. Compare this function with the complicated graph version displayed in the following code blocks:"
]
},
{
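For context, the generated graph code referred to above can be inspected directly; a small sketch (the toy function below is illustrative, and `tf.autograph.to_code` is assumed available in the installed TF version):

```python
import tensorflow as tf

def sum_even(items):
    s = 0
    for c in items:
        if c % 2 > 0:
            continue
        s += c
    return s

# Print the graph-compatible source that AutoGraph generates for the function.
print(tf.autograph.to_code(sum_even))
```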
4 changes: 2 additions & 2 deletions site/en/r1/guide/distribute_strategy.ipynb
@@ -118,7 +118,7 @@
"## Types of strategies\n",
"`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:\n",
"\n",
"* Syncronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.\n",
"* Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.\n",
"* Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.\n",
"\n",
"In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF."
@@ -371,7 +371,7 @@
"id": "hQv1lm9UPDFy"
},
"source": [
"So far we've talked about what are the different stategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end."
"So far we've talked about what are the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end."
]
},
{
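For context, a minimal sketch of synchronous data-parallel training with `tf.distribute.MirroredStrategy`, using the TF2-style `strategy.scope()` API and toy data (the r1 notebook's exact calls may differ):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy performs synchronous all-reduce training across local devices.
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created in this scope are mirrored on every replica.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(loss="mse", optimizer="sgd")

# Toy data; gradients are aggregated across replicas at each step.
x = np.random.random((64, 10)).astype("float32")
y = np.random.random((64, 1)).astype("float32")
model.fit(x, y, epochs=1, batch_size=16)
```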
