Key takeaways:
Chinese lawmakers have recognized the central role of algorithms in the operation of internet platforms and defined them as “automated decision-making” in the Personal Information Protection Law (PIPL), enacted on 20 August 2021, regulating this technology for the first time.
Under the PIPL, platforms must assess the impact of their algorithms in advance, and are liable for the results of automated decision-making afterwards.
The PIPL expands platform users' right to know, and requires platforms to break the "information cocoons" created by algorithmic personalized recommendations.
China's Personal Information Protection Law (個人信息保護(hù)法), enacted in August 2021, draws the boundaries for internet platforms conducting automated decision-making through algorithms.
Ⅰ. Background
Chinese internet platforms, such as ByteDance's TopBuzz and TikTok, make extensive use of recommendation algorithms to push content and products to their users.
However, such algorithms have come under question from the public and regulators, on the grounds that they allegedly interfere with users' freedom of decision and thereby create moral hazard.
Chinese lawmakers have recognized the central role of algorithms in the operation of such platforms and defined them as “automated decision-making” in the Personal Information Protection Law (hereinafter “the PIPL”), enacted on 20 August 2021, regulating this technology for the first time.
In accordance with the PIPL, automated decision-making refers to the activity of automatically analyzing and assessing individuals' behavioral habits, hobbies, or financial, health, and credit status through computer programs, and making decisions on that basis. (Article 73)
Prior to its enactment, opinions were divided over platforms' liability for automated decision-making. For example, some argued that platforms should not be liable for the results of their automated decision-making algorithms, which were essentially a neutral technology. The PIPL clarifies the opposite.
Ⅱ. Restrictions on platforms
1. Regulators directly review the algorithms
As personal information processors, platforms shall regularly audit whether their processing of personal information complies with laws and administrative regulations. (Article 54)
This requires platforms to periodically audit their algorithmic automated decision-making and other information-processing activities.
Under this rule, regulators can also audit the internal operation of platforms' algorithms, rather than merely supervising platforms' conduct and its consequences from the outside.
Regulators thus treat the algorithms themselves as a direct object of regulation, which enables them to intervene in the technology and details of automated decision-making.
2. Platforms assess the impact of algorithms in advance
As personal information processors, platforms shall conduct personal information protection impact assessment in advance and record the processing information if they use personal information for automated decision-making. (Article 55)
The assessment by platforms shall cover the following:
A. Whether the purposes, methods or any other aspect of the processing of personal information are lawful, legitimate and necessary;
B. The impact on personal rights and interests and level of risk; and
C. Whether the security protection measures taken are lawful, effective and commensurate with the level of risk.
Accordingly, platforms must conduct a prior assessment before an automated decision-making algorithm goes live. The assessment covers the legitimacy and necessity of the algorithmic automated decision-making, as well as its impact and risks.
Defective algorithmic automated decision-making by platforms may harm citizens' property and personal rights, and even public interests and national security.
The negative consequences may therefore affect thousands of users at once, and even if the platforms are subsequently held accountable, the damage already done may be difficult to remedy.
To prevent such situations, the law establishes a prior assessment system for platforms' algorithms, in an attempt to intervene before the algorithms are deployed.
3. Platforms are liable for the results of the decision-making afterwards
Platforms shall assume the following obligations for results of automated decision-making (Article 24):
A. Platforms shall ensure that the results are fair and impartial
Where personal information processors conduct automated decision-making with personal information, they shall ensure transparency of the decision-making and fairness and impartiality of the results, and shall not give unreasonable differential treatment to individuals in terms of transaction prices or other transaction conditions.
B. Platforms shall provide automated decision-making options not targeting personal characteristics to their users.
Where push-based information delivery or commercial marketing to individuals is conducted by means of automated decision-making, options not targeting the personal characteristics of the individuals, or an easy way to refuse such delivery, shall be provided to the individuals simultaneously.
C. Platforms shall make explanations of the decision-making results.
Where a decision that has a material impact on an individual's rights and interests is made by means of automated decision-making, the individual shall have the right to request the personal information processor to make explanations, as well as the right to refuse the making of decisions by the personal information processor solely by means of automated decision-making.
The rule holds platforms liable for the results of automated decision-making in the following ways:
A. It rejects the "technology neutrality" defense that platforms have relied on. Platforms are responsible for the results of algorithmic automated decision-making and must ensure that those results are fair and reasonable.
B. It expands platform users' right to know. Users can demand transparency in automated decision-making and request explanations from platforms where a decision has "a material impact".
C. It requires platforms to break the "information cocoons" created by algorithmic personalized recommendations and to protect users' right to know.
Ⅲ. Our Comments
China has made a breakthrough in the PIPL by adding legal rules for platforms' automated decision-making algorithms. However, these rules still need further refinement. For example, the law does not clarify:
A. the conditions under which platforms must initiate an algorithm assessment;
B. whether, and to what extent, assessment reports will be made public after platforms evaluate their algorithms; and
C. how platforms should be held liable for damage caused by their algorithmic automated decision-making.
I presume that Chinese regulators are still exploring the possibility of enacting a series of specific regulations to further implement the PIPL.