We received feedback from readers about this issue. Thank you for your feedback.
"I'm working through the 'Image Generation AI Stable Diffusion Start Guide' and I'm stuck on 'Section 2-2: Setting up the environment in Google Colab.' I'm getting an error when trying to download the model on page 46. I'm using a paid Google Colab Pro account and have plenty of compute units remaining. Here's the end of the error message..."
(The full error message is reproduced in the appendix at the end of this document.)
While the official AUTOMATIC1111/Stable-Diffusion-WebUI has not been maintained since version 1.10.0, TheLastBen has continued community maintenance of the Colab notebook. In addition, the new Flux.1 model is now supported and the WebUI has been updated to version 1.10.1; however, keeping protobuf and its related libraries at compatible versions has become more complex.
One reason the work took so long is the long-term support (LTS) maintenance that AICU began on July 29, 2024. However, because the underlying problem is deep-rooted, we have rolled back the LTS version for now and released a hotfix as an urgent measure on December 28.
https://github.com/aicuai/Book-StartGuideSDXL
With this hotfix applied, AUTOMATIC1111 is currently running with the following configuration:
version: v1.10.1
python: 3.10.12
torch: 2.5.1+cu121
xformers: 0.0.27.post2
gradio: 3.41.2
checkpoint: 1449e5b0b9 (AnimagineXL3.0)
*AnimagineXL3.0 is set as the default to match the book's instructions.
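For reference, the following is a minimal check (a sketch of ours, not part of the official notebook) that can be run in a Colab cell to confirm the runtime matches the versions listed above before launching the WebUI:

# Sketch only: verify the Colab runtime against the versions listed above.
import torch, gradio, google.protobuf
print("torch:", torch.__version__)              # expected: 2.5.1+cu121
print("gradio:", gradio.__version__)            # expected: 3.41.2
print("protobuf:", google.protobuf.__version__)
try:
    import xformers
    print("xformers:", xformers.__version__)    # expected: 0.0.27.post2
except ImportError:
    print("xformers is not installed")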
The following are the major updates since the first edition:
2024/12/28: Implemented a hotfix compatible with the 3rd printing of the book. https://j.aicu.ai/SBXL1 https://j.aicu.ai/SBXL2
2024/11/18: Added a section on creating your own LoRA (Chapter 6.3 in the 3rd printing). https://j.aicu.ai/SDLoRA2
2024/07/29: Detailed explanation of our efforts towards long-term support (LTS) for TheLastBen's "Fast Stable Diffusion - AUTOMATIC1111" on Google Colab. https://note.com/aicu/n/nf5562077c8ad
2024/07/28: Added support for AUTOMATIC1111 v1.10.x. https://j.aicu.ai/SBXL1 https://j.aicu.ai/SBXL2
The current hotfix is based on TheLastBen's November 21, 2024 version (commit a12b351), running AUTOMATIC1111 v1.10.1. The previous LTS setup has been temporarily removed. Going forward, we plan to further improve stability and work toward a more permanent LTS solution while monitoring Google Colab's support for xformers 0.0.27.post2.
https://aicu.jp/n/nf5562077c8ad
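For readers who maintain their own copies of the notebook, the following is a rough sketch of pinning TheLastBen's repository to that commit in a Colab cell. The paths are our assumptions, and the official hotfix notebooks (https://j.aicu.ai/SBXL1 and https://j.aicu.ai/SBXL2) already handle the equivalent setup for you:

# Sketch only: pin TheLastBen's fast-stable-diffusion to commit a12b351,
# the version the current hotfix is based on. The clone path is illustrative.
!git clone https://github.com/TheLastBen/fast-stable-diffusion /content/fast-stable-diffusion
%cd /content/fast-stable-diffusion
!git checkout a12b351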
These corrections are reflected in the third printing of the book.
We also encourage you to refer to our publisher's support site.
https://www.sbcr.jp/support/4815617836/
We hope you continue to enjoy the "SD Yellow Book" in the new year!
Authors: AICU media, Akihiko Shirai
Appendix: The reported error is included below:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1146, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 83, in <module>
from accelerate import __version__ as accelerate_version
File "/usr/local/lib/python3.10/dist-packages/accelerate/__init__.py", line 7, in <module>
from .accelerator import Accelerator
File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 33, in <module>
from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
File "/usr/local/lib/python3.10/dist-packages/accelerate/tracking.py", line 29, in <module>
from torch.utils import tensorboard
File "/usr/local/lib/python3.10/dist-packages/torch/utils/tensorboard/__init__.py", line 12, in <module>
from .writer import FileWriter, SummaryWriter # noqa: F401
File "/usr/local/lib/python3.10/dist-packages/torch/utils/tensorboard/writer.py", line 13, in <module>
from tensorboard.compat.proto import event_pb2
File "/usr/local/lib/python3.10/dist-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module>
from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module>
from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorboard/compat/proto/tensor_pb2.py", line 16, in <module>
from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorboard/compat/proto/resource_handle_pb2.py", line 16, in <module>
from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorboard/compat/proto/tensor_shape_pb2.py", line 36, in <module>
_descriptor.FieldDescriptor(
File "/usr/local/lib/python3.10/dist-packages/google/protobuf/descriptor.py", line 553, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 13, in
initialize.imports()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/initialize.py", line 17, in imports
import pytorch_lightning # noqa: F401
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/init.py", line 34, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/init.py", line 14, in
from pytorch_lightning.callbacks.callback import Callback
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/callback.py", line 25, in
from pytorch_lightning.utilities.types import STEP_OUTPUT
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/types.py", line 28, in
from torchmetrics import Metric
File "/usr/local/lib/python3.10/dist-packages/torchmetrics/init.py", line 14, in
from torchmetrics import functional # noqa: E402
File "/usr/local/lib/python3.10/dist-packages/torchmetrics/functional/init.py", line 77, in
from torchmetrics.functional.text.bleu import bleu_score
File "/usr/local/lib/python3.10/dist-packages/torchmetrics/functional/text/init.py", line 30, in
from torchmetrics.functional.text.bert import bert_score # noqa: F401
File "/usr/local/lib/python3.10/dist-packages/torchmetrics/functional/text/bert.py", line 24, in
from torchmetrics.functional.text.helper_embedding_metric import (
File "/usr/local/lib/python3.10/dist-packages/torchmetrics/functional/text/helper_embedding_metric.py", line 26, in
from transformers import AutoModelForMaskedLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizerBase
File "", line 1075, in _handle_fromlist
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1136, in getattr
module = self._get_module(self._class_to_module[name])
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1148, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
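For reference, the protobuf error message above already lists two temporary workarounds. Below is a minimal sketch of how they could be applied in a Colab cell before launching the WebUI; the exact pin (protobuf 3.20.3) is our assumption, and the hotfix notebooks linked above remain the recommended solution.

# Workaround 1 (quoted from the error message): downgrade protobuf to 3.20.x or lower.
# 3.20.3 is an assumed pin; any 3.20.x release satisfies the descriptor check.
!pip install -q "protobuf==3.20.3"

# Workaround 2 (quoted from the error message): force the pure-Python protobuf
# implementation. Slower, but avoids regenerating the _pb2.py files. Set this
# before anything that imports protobuf (tensorboard, transformers) is loaded.
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"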