How Does China Address Platform Accountability in Algorithmic Decision-Making?


Key takeaways:

  • Chinese lawmakers have recognized the central role of algorithms in the operation of Internet platforms and defined the technology as “automated decision-making” in the Personal Information Protection Law (PIPL), enacted on 20 Aug. 2021, regulating it for the first time.

  • Under the PIPL, platforms shall assess the impact of their algorithms in advance and are liable for the results of the decision-making afterwards.

  • The PIPL expands platform users’ right to know and requires platforms to break the “information cocoons” created by algorithmic personalized recommendations.


China's Personal Information Protection Law (個人信息保護(hù)法), enacted in August 2021, draws the boundaries for Internet platforms that conduct automated decision-making through algorithms.

Ⅰ. Background

Chinese Internet platforms, most notably ByteDance’s TopBuzz and TikTok, make extensive use of recommendation algorithms to push content and products to their users.

However, such algorithms have been questioned by the public and regulators because they allegedly interfere with users’ freedom of decision and thus create moral hazard.

Chinese lawmakers have recognized the central role of algorithms in the operation of such platforms and defined the technology as “automated decision-making” in the Personal Information Protection Law (hereinafter ‘the PIPL’), enacted on 20 Aug. 2021, regulating it for the first time.

Under the PIPL, automated decision-making refers to the activity of automatically analyzing and assessing individuals’ behavioral habits, hobbies, or financial, health and credit status through computer programs, and making decisions on that basis. (Article 73)
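To make the statutory definition concrete, automated decision-making in this sense is a program that assesses an individual’s profile and acts on the result without human review. Below is a minimal, hypothetical Python sketch; the field names, threshold, and loan scenario are invented for illustration and are not drawn from the PIPL or any real platform.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Categories named in Article 73: behavioral habits, hobbies,
    # and financial / health / credit status.
    browsing_minutes_per_day: float
    declared_hobbies: list
    credit_score: int

def decide_loan_offer(profile: UserProfile) -> bool:
    """An automated decision: the program assesses the profile and
    decides, with no human reviewing the outcome."""
    # Hypothetical rule; any real threshold would itself be part of the algorithm.
    return profile.credit_score >= 650 and profile.browsing_minutes_per_day > 10

print(decide_loan_offer(UserProfile(35.0, ["travel"], 700)))  # True
```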

Before the PIPL, opinions were divided over platforms’ liability for automated decision-making. For example, some believed that platforms should not be liable for the results of their automated decision-making algorithms, which were essentially a kind of neutral technology. The PIPL clarifies the opposite.

Ⅱ. Restrictions on platforms

1. Regulators directly review the algorithms

As personal information processors, platforms shall audit the compliance of their processing of personal information with laws and administrative regulations on a regular basis. (Article 54)

This requires platforms to periodically audit their algorithmic automated decision-making and other information processing activities.

Under this rule, regulators can also audit the internal operation of platforms’ algorithms, rather than merely supervising platforms’ conduct and its consequences from the outside.

Accordingly, regulators treat the algorithms themselves as the direct object of regulation, which enables them to intervene in the technology and details of automated decision-making.
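In practice, this kind of direct auditability presupposes that platforms keep inspectable records of how their algorithms decide. The following is a minimal sketch, assuming a hypothetical platform logs each automated decision with its inputs and algorithm version so that a periodic audit under Article 54 has something to examine; the schema is invented for illustration.

```python
import json
import time

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical append-only audit trail

def log_decision(user_id: str, model_version: str, inputs: dict, output: str) -> None:
    """Record one automated decision so a later compliance audit can tie
    each result to the exact algorithm version and the inputs it saw."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_decision("u123", "recsys-v4.2", {"hobby": "travel"}, "push_travel_ads")
```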

2. Platforms assess the impact of algorithms in advance 

As personal information processors, platforms shall conduct a personal information protection impact assessment in advance and record the processing information if they use personal information for automated decision-making. (Article 55)

The assessment by platforms shall cover the following:

A. Whether the purposes, methods or any other aspect of the processing of personal information are lawful, legitimate and necessary;

B. The impact on personal rights and interests and level of risk; and

C. Whether the security protection measures taken are lawful, effective and commensurate with the level of risk.

Accordingly, platforms shall conduct a prior assessment before their automated decision-making algorithms go live. The assessment covers the legitimacy and necessity of the algorithmic automated decision-making, as well as its impact and risks.
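As an illustration, the Article 55 checklist above could be captured as a structured record that a platform completes and retains before launch. This is a hypothetical sketch; the PIPL prescribes the content of the assessment, not any particular format or field names.

```python
from dataclasses import dataclass, asdict

@dataclass
class ImpactAssessment:
    """Pre-launch record mirroring the three Article 55 assessment items."""
    algorithm_name: str
    purpose_lawful_and_necessary: bool       # item A
    impact_on_rights: str                    # item B: narrative description
    risk_level: str                          # item B: e.g. "low" / "medium" / "high"
    safeguards: list                         # item C
    safeguards_commensurate_with_risk: bool  # item C

assessment = ImpactAssessment(
    algorithm_name="feed-ranker",
    purpose_lawful_and_necessary=True,
    impact_on_rights="Personalized ranking may narrow the content users see.",
    risk_level="medium",
    safeguards=["data minimization", "periodic bias review"],
    safeguards_commensurate_with_risk=True,
)
print(asdict(assessment))  # the record a platform would retain for review
```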

Defective algorithmic automated decision-making may harm citizens’ property and personal rights, and even public interests and national security.

Its negative consequences may affect thousands of users at once. At that point, even if the platforms are held accountable, the damage already done may be difficult to remedy.

To prevent such a situation, the law establishes a prior assessment system for platforms’ algorithms in an attempt to intervene in the algorithms beforehand.

3. Platforms are liable for the results of the decision-making afterwards

Platforms shall assume the following obligations for results of automated decision-making (Article 24):

A. Platforms shall ensure that the results are fair and impartial.

Where personal information processors conduct automated decision-making with personal information, they shall ensure transparency of the decision-making and fairness and impartiality of the results, and shall not give unreasonable differential treatment to individuals in terms of transaction prices or other transaction conditions.

B. Platforms shall provide automated decision-making options not targeting personal characteristics to their users.

Where push-based information delivery or commercial marketing to individuals is conducted by means of automated decision-making, options not targeting the personal characteristics of the individuals, or convenient ways to refuse, shall be provided to the individuals simultaneously.

C. Platforms shall make explanations of the decision-making results.

Where a decision that has a material impact on an individual's rights and interests is made by means of automated decision-making, the individual shall have the right to request the personal information processor to make explanations, as well as the right to refuse the making of decisions by the personal information processor solely by means of automated decision-making.
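Obligations B and C above translate naturally into product controls: a feed option that does not target personal characteristics, and a path for users to demand an explanation of a materially impactful decision. The sketch below is hypothetical; the function names, the recency-based fallback ranking, and the explanation text are invented for illustration.

```python
from datetime import datetime
from typing import Optional

def rank_feed(items: list, interest_scores: Optional[dict] = None) -> list:
    """Item B: users must be offered an option that does not target
    their personal characteristics."""
    if interest_scores is None:  # user opted out of personalization
        # Non-personalized fallback: recency only.
        return sorted(items, key=lambda it: it["published_at"], reverse=True)
    # Personalized ranking by the user's per-topic interest scores.
    return sorted(items, key=lambda it: interest_scores.get(it["topic"], 0.0),
                  reverse=True)

def explain_decision(decision_id: str) -> str:
    """Item C: users may demand an explanation of a decision with material
    impact. A real system would retrieve the logged inputs and model version."""
    return f"Decision {decision_id}: ranked by declared interests; no transaction terms were affected."

items = [
    {"topic": "travel", "published_at": datetime(2021, 8, 20)},
    {"topic": "finance", "published_at": datetime(2021, 8, 21)},
]
print(rank_feed(items)[0]["topic"])                   # finance (recency only)
print(rank_feed(items, {"travel": 0.9})[0]["topic"])  # travel (personalized)
```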

The rule holds platforms liable for the results of automated decision-making in several ways:

A. The rule does not recognize the "technology neutrality" defense that platforms have used. Platforms are responsible for the results of algorithmic automated decision-making and must ensure that the results are fair and impartial.

B. The rule expands platform users’ right to know. Users can demand transparency of automated decision-making results, as well as explanations from platforms where a decision has “a material impact”.

C. The rule requires platforms to break the "information cocoons" created by algorithmic personalized recommendations and to protect users’ right to know.

Ⅲ. Our Comments

China has made a breakthrough in the PIPL by adding legal rules for platforms’ algorithmic automated decision-making. However, these rules still need further refinement. For example, the law does not clarify:

A. the conditions under which platforms must initiate an algorithm assessment;

B. whether, and to what extent, assessment reports will be made public after platforms evaluate their algorithms; and

C. how platforms should be held liable for damage caused by their algorithmic automated decision-making.

I presume that Chinese regulators are still exploring the possibility of enacting a series of specific regulations to further implement the PIPL.

