bulk_create does exactly what its name says: it creates a lot of objects efficiently, saving a lot of queries. The trade-off is that the objects you get back are somewhat incomplete. If you do:
>>> categories = Category.objects.bulk_create([
...     Category(title="Python", user=user),
...     Category(title="Django", user=user),
...     Category(title="HTML5", user=user),
... ])
>>> [x.pk for x in categories]
[None, None, None]
That doesn't mean your categories don't have pks; it just means the query didn't retrieve them (when the key is an AutoField). If you need the pks for some reason, you will have to save the objects the classic way.
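That is, falling back to one query per object when you actually need the keys:

categories = [
    Category(title="Python", user=user),
    Category(title="Django", user=user),
]
for category in categories:
    category.save()  # one INSERT per object, but sets category.pk
[c.pk for c in categories]  # e.g. [1, 2]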
Actually my colleague suggested the following solution, which seems all too obvious now: add a new column called bulk_ref, populate it with a unique value, and insert it with every row. Afterwards, simply query the table with the bulk_ref you set beforehand and voila, your inserted records are retrieved.
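For example, a minimal sketch of the idea (the bulk_ref UUID field on Category is an assumption; any sufficiently unique value works):

import uuid

batch_ref = uuid.uuid4()  # one unique value shared by the whole batch
Category.objects.bulk_create([
    Category(title="Python", user=user, bulk_ref=batch_ref),
    Category(title="Django", user=user, bulk_ref=batch_ref),
])
# Re-query by the batch reference: same rows, now with pks populated.
created = Category.objects.filter(bulk_ref=batch_ref)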
Remember what the bulk_create docs say:
If the model’s primary key is an AutoField it does not retrieve and set the primary key attribute, as save() does.
But there is good news: from memory, there have been a couple of tickets discussing bulk_create. The ticket listed above is the most likely to get a solution implemented soon, but obviously there is no guarantee on timing, or that it will ever make it in.
Probably the simplest workaround is assigning primary keys manually. It depends on the particular case, but sometimes it's enough to start from max(id)+1 of the table and assign numbers, incrementing for every object. However, if several clients may insert records simultaneously, some lock may be needed.
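A minimal sketch of that workaround, reusing the Category model from above (only safe if no other client inserts concurrently):

from django.db.models import Max

def bulk_create_with_pks(objs):
    start = (Category.objects.aggregate(max_id=Max("id"))["max_id"] or 0) + 1
    for offset, obj in enumerate(objs):
        obj.id = start + offset  # assign the pk in the application
    Category.objects.bulk_create(objs)
    return objs  # every obj.pk is already set, no re-query needed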
The approach suggested by @Or Duan works for PostgreSQL when using bulk_create with ignore_conflicts=False. With ignore_conflicts=True, you don't get the values for the AutoField (usually id) in the returned objects.
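For example (a sketch reusing the Category model from above, on PostgreSQL):

created = Category.objects.bulk_create(
    [Category(title="Python", user=user)],
    ignore_conflicts=True,
)
created[0].pk  # None, even on PostgreSQL, because ignore_conflicts=True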
I have tried many strategies to get around this limitation of MariaDB/MySQL. The only reliable solution I came up with in the end was to generate the primary keys in the application. DO NOT generate INT AUTO_INCREMENT PK values yourself; it won't work, not even in a transaction with isolation level SERIALIZABLE, because the PK counter in MariaDB is not protected by transaction locks.
The solution is to add unique UUID fields to the models, generate their values in the model class, and use those as the identifiers. When you save a bunch of models to the database you still won't get back their actual PKs, but that's fine, because in subsequent queries you can uniquely identify them by their UUIDs.
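A minimal sketch of this pattern (model and field names are illustrative):

import uuid
from django.db import models

class Category(models.Model):
    # Application-generated identifier, set before the row ever hits the DB.
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)
    title = models.CharField(max_length=100)

objs = [Category(title=t) for t in ("Python", "Django", "HTML5")]
Category.objects.bulk_create(objs)  # pks still None on MySQL/MariaDB
# Identify the new rows by their UUIDs instead of their pks.
saved = Category.objects.filter(uuid__in=[o.uuid for o in objs])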
I will share how InnoDB (MySQL) handles AUTO_INCREMENT and an approach to get primary keys back from bulk_create (Django).
According to the bulk_create docs: "If the model's primary key is an AutoField it does not retrieve and set the primary key attribute, as save() does, unless the database backend supports it (currently PostgreSQL)." So we need to find the cause of the problem, in Django or in MySQL, before looking for a solution.
An AutoField in Django is actually an AUTO_INCREMENT column in MySQL, which is used to generate a unique identity for new rows (ref).
You want to bulk_create objects (Django), which means inserting multiple rows in a single SQL query. But how can you retrieve the automatically generated PKs (primary keys)? Thanks to LAST_INSERT_ID: it returns the first automatically generated value of the most recently executed INSERT statement. "This value cannot be affected by other clients, even if they generate AUTO_INCREMENT values of their own. This behavior ensures that each client can retrieve its own ID without concern for the activity of other clients, and without the need for locks or transactions."
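A quick illustration of that guarantee (a sketch, assuming MySQL and the book_tab table from the sample code below):

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute(
        "INSERT INTO book_tab (name) VALUES (%s), (%s), (%s)",
        ["a", "b", "c"],
    )
    inserted = cursor.rowcount   # 3 rows inserted by this statement
    first_id = cursor.lastrowid  # LAST_INSERT_ID: first id of the batch
    batch_ids = list(range(first_id, first_id + inserted))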
I encourage you to read AUTO_INCREMENT Handling in InnoDB and the Django source of django.db.models.query.QuerySet.bulk_create to see why Django does not support this for MySQL yet. It's interesting. Come back here and share your thoughts in a comment, please.
Next, I will show you sample code:
from django.db import connections, models, transaction
from django.db.models import AutoField, sql


def dict_fetch_all(cursor):
    """Return all rows from a cursor as a list of dicts."""
    columns = [col[0] for col in cursor.description]
    return [
        dict(zip(columns, row))
        for row in cursor.fetchall()
    ]


class BulkQueryManager(models.Manager):
    def bulk_create_return_with_id(self, objs, batch_size=2000):
        self._for_write = True
        # All concrete fields except the AutoField; the database generates
        # the primary keys for us.
        fields = [f for f in self.model._meta.concrete_fields if not isinstance(f, AutoField)]
        created_objs = []
        with transaction.atomic(using=self.db):
            with connections[self.db].cursor() as cursor:
                for item in [objs[i:i + batch_size] for i in range(0, len(objs), batch_size)]:
                    # Build and execute a raw multi-row INSERT for this batch.
                    query = sql.InsertQuery(self.model)
                    query.insert_values(fields, item)
                    for raw_sql, params in query.get_compiler(using=self.db).as_sql():
                        cursor.execute(raw_sql, params)
                    # cursor.lastrowid (LAST_INSERT_ID) is the FIRST id generated
                    # by that INSERT and cursor.rowcount is the number of rows it
                    # inserted, so the batch occupies a contiguous id range.
                    raw = "SELECT * FROM %s WHERE id >= %s ORDER BY id DESC LIMIT %s" % (
                        self.model._meta.db_table, cursor.lastrowid, cursor.rowcount
                    )
                    cursor.execute(raw)
                    created_objs.extend(dict_fetch_all(cursor))
        return created_objs
class BookTab(models.Model):
    name = models.CharField(max_length=128)

    bulk_query_manager = BulkQueryManager()

    class Meta:
        db_table = 'book_tab'


def test():
    x = [BookTab(name="1"), BookTab(name="2")]
    create_books = BookTab.bulk_query_manager.bulk_create_return_with_id(x)
    print(create_books)  # [{'id': 2, 'name': '2'}, {'id': 1, 'name': '1'}]
The idea is to use a cursor to execute the raw INSERT SQL and then fetch the created records back. According to AUTO_INCREMENT Handling in InnoDB, this guarantees that no other client's records are interleaved with your objs, so the batch occupies the contiguous PK range from cursor.lastrowid to cursor.lastrowid + cursor.rowcount - 1 (cursor.lastrowid is LAST_INSERT_ID, the first id generated by the statement).
Bonus: this is running in production at my company. But you need to be careful about the side effects, which are the reason Django does not support this for MySQL.