Developers, Choose Wisely: A Guide for Responsible Use of Machine Learning APIs
Disclaimer: I am an independent researcher @Taraaz with no affiliation with any of the companies mentioned below.
Last month, my friend posted a story on Instagram. It was about boycotting a Unilever-made skin lightening product from India which goes by the brand name “Fair & Lovely.” The campaign’s goal? To bring attention to the larger problem of colorism in India.
I’m not Indian. But the campaign’s message resonated with me. In Iran, where I grew up, I too encountered similar “beauty” products that claimed to be able to lighten the skin of those who used them. I took them for granted when I was a kid. But these days, in the wake of the Black Lives Matter movement, there’s been a moment of awakening for many from different countries who think about racism and colorism at home.
While reading through tweets from the campaign, I began to think about the emotion behind these tweets. I wondered how Unilever would perceive and react to them. Of course, their social media team wouldn’t be able to read every single tweet. But perhaps they use social media analysis tools – powered by emotion recognition technologies – to get a sense of people’s demands.
I’m a researcher in technology and human rights. My job is to understand how technical designs impact human rights. I know that one of the promises of text-based emotion analysis tools is to help companies to understand customer satisfaction based on social media engagement.
That’s why I decided to use the example of “Fair & Lovely” to scrutinize off-the-shelf machine learning-based emotion analysis APIs. How do these practices — which are now the norm among major brands — perform in a specific case such as this? In particular, I wondered whether the positive sentiment of the phrase “Fair & Lovely” might trick the emotion analysis tool and lead to the misclassification of a sentence’s sentiment, even if the overall sentiment of the sentence may not be positive.
This question led me to write this blog post, especially for developers who use machine learning technologies as a service (MLaaS) and also for my fellow human rights practitioners who are interested in examining human rights implications of tech companies’ third-party relationships.
- I’ll tell you why API terms are so important to understand, and what some misuses of APIs have looked like in the past few years.
- I’ll choose the IBM Tone Analyzer API and the ParallelDots Text Analysis Emotion API and test their results on tweets about Unilever’s “Fair & Lovely” product. I’ll walk you through those APIs’ developer policies, terms of service, and API documents, and show you some criteria to consider before choosing an API.
- I’ll provide a set of recommendations for developers who want to use general-purpose APIs for a specific domain in a responsible manner. I’ll also provide recommendations for auditors and human rights practitioners who study companies’ third-party relationships.
So, let’s say you are a developer or a social media analyst, and you are approached by Unilever to analyze the emotion behind customers’ social media engagement. What do you do?
As a hypothetical, we will assume that you don’t have the necessary skills, data, and computation power to build a whole custom machine learning model, nor do you want to use any pre-trained model. Instead, you choose the easiest route: an off-the-shelf general-purpose emotion-analysis API.
If that’s the case, what would be your criteria to choose and use these APIs in a responsible manner?
APIs have rules — and power 🔍
First, the basics. An Application Programming Interface (API) is what helps different software applications interact with each other. It allows one application to make a request (for data or a service) and the other application to respond to it. For example, if you are a social media company and want researchers to use your data to conduct research, you give them access to those data via an API. If you want IoT devices at home to interact with each other (for example, your smart lamp reacting to events on your Google calendar), you connect those services through APIs.
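To make the request/response pattern concrete, here is a minimal sketch in Python. The endpoint URL, API key, and response shape are hypothetical placeholders rather than any particular provider’s API.

```python
# Minimal sketch of the request/response pattern behind an API call.
# The endpoint URL, API key, and response shape are hypothetical placeholders.
import requests

API_KEY = "YOUR_API_KEY"  # issued when you register a developer account
ENDPOINT = "https://api.example.com/v1/emotion"  # hypothetical service URL

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "Unilever - cancel fair & lovely - sign the petition!"},
    timeout=10,
)
response.raise_for_status()  # surface HTTP errors instead of failing silently
print(response.json())       # the provider returns its analysis as JSON
```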
But as with any kind of interaction, there need to be rules between those services before they start working with each other. These rules are set by API policies, developer policies, and Terms of Service.
So far, so good. But when you give it more thought, you realize that those services make agreements with each other to provide services for you, as a user, or to handle your data, without you fully understanding how they reach those agreements.
It’s kind of bizarre, right?
Here are a couple of reminders of why this sort of thing is important.
1) You remember Cambridge Analytica and Facebook, right? (As a refresher, 87 million users’ information was “improperly” shared with Cambridge Analytica to analyze and manipulate Facebook users’ political behavior.) Long story short, the underlying reason for such a privacy-invasive data sharing practice was the abuse of Facebook’s APIs. As a result, Facebook restricted developers’ data access by making changes to its API policies.
2) There are also concerns when ML APIs are used as analytical services. In this case, developers are the ones who have the data and go to big tech companies’ ML APIs to process that data (MLaaS). Joy Buolamwini and Timnit Gebru’s Gender Shades study revealed significant racial and gender discrimination in several facial recognition APIs. In fact, as a result, big tech companies limited providing their APIs to law enforcement agencies in the US (who knows about their business relationships with other countries, though… 🤷🏻‍♀️)
But what about the responsibilities of developers who want to use tech companies’ general-purpose services? Is there any guidance to help them choose and use those ML APIs responsibly in their specific domains?
An emotion analysis API: IBM Tone Analyzer or ParallelDots Text Analysis?
As a developer, if you don’t want to build an ML system from scratch nor do you want to use a pre-trained model, the other option is to use cloud-based ML APIs. Everything is ready to go: you set up a developer account and receive API credentials, you provide input data, the service provider works its “magic,” and you receive the results as output. Easy! You don’t even need any knowledge about data science and machine learning to be able to integrate that API with your product. Or at least, this is how companies market their services.
As a developer, you have an obvious set of criteria for choosing a service: accuracy, cost, and speed. But what if you wanted to pick your ML API service based on other criteria, such as privacy, security, fairness, and transparency? What process do you go through? What do you check?
Let’s go back to the “Fair & Lovely” tweets. Putting myself in the shoes of our hypothetical developer, I collected several hundred English-language tweets about “fair & lovely” using Twint. Next, I looked at RapidAPI, a platform that helps developers manage and compare different APIs, and picked IBM Watson Tone Analyzer and ParallelDots as the best options. Both services promise to infer emotions including fear, anger, joy, happiness, etc. from tweets.
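For reference, collecting such a corpus with Twint looks roughly like the sketch below. The search string, limit, and output file are illustrative, and Twint’s configuration options may have changed since this was written.

```python
# Rough sketch of collecting English-language tweets about "fair & lovely"
# with Twint. The search string, limit, and output path are illustrative.
import twint

c = twint.Config()
c.Search = '"fair & lovely"'  # phrase to search for
c.Lang = "en"                 # English-only tweets, as in this experiment
c.Limit = 500                 # cap on the number of tweets to collect
c.Store_csv = True
c.Output = "fair_and_lovely_tweets.csv"

twint.run.Search(c)
```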
Then I registered with both services and received API credentials for free developer accounts. IBM’s free “Lite” account provides 2500 API calls per month; ParallelDots is free for 1000 API hits/day.
Finally, I ran the experiments below. These are the result of providing my corpus of “fair & lovely” tweets as input and then gathering the APIs’ output. You can see more examples in this spreadsheet.
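If you want to reproduce something similar, the sketch below shows roughly how the two services can be wired together in Python. The package names, call signatures, and response fields reflect my reading of the ibm-watson and paralleldots Python packages (and the "tweet" CSV column assumes Twint’s default output); treat them as assumptions and check the current documentation before running it.

```python
# Rough sketch: score each collected tweet with both emotion services.
# SDK names, signatures, and response fields are assumptions based on the
# ibm-watson and paralleldots Python packages; verify against current docs.
import csv

import paralleldots
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# IBM Tone Analyzer client (free "Lite" tier credentials)
tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator("YOUR_IBM_API_KEY"),
)
tone_analyzer.set_service_url("YOUR_IBM_SERVICE_URL")

# ParallelDots client (free tier credentials)
paralleldots.set_api_key("YOUR_PARALLELDOTS_API_KEY")

# "tweet" is assumed to be the column holding the tweet text in Twint's CSV
with open("fair_and_lovely_tweets.csv", newline="", encoding="utf-8") as f:
    tweets = [row["tweet"] for row in csv.DictReader(f)]

for text in tweets:
    ibm_result = tone_analyzer.tone(
        {"text": text}, content_type="application/json"
    ).get_result()
    pd_result = paralleldots.emotion(text)
    print(text)
    print("  IBM tones:", ibm_result.get("document_tone", {}).get("tones"))
    print("  ParallelDots:", pd_result.get("emotion"))
```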
Please note the drastically different results of the two services.
I also changed the phrase “fair & lovely” to more neutral phrases such as “your product” and “this product.” The output changed. However, from a human analytical standpoint, the message — and its sentiment — are the same.
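A sketch of that substitution step is below. The regular expression only covers the spellings I expect (“fair & lovely” / “fair and lovely”), so real tweets may need more variants; the scoring itself reuses the API calls from the previous sketch.

```python
# Swap the brand name for a neutral phrase, then score both versions with the
# same API call and compare. The regex only covers the obvious spellings.
import re

BRAND = re.compile(r"fair\s*(?:&|and)\s*lovely", flags=re.IGNORECASE)

def neutralize(text: str, neutral_phrase: str = "this product") -> str:
    """Replace the brand name with a sentiment-neutral phrase."""
    return BRAND.sub(neutral_phrase, text)

tweet = "Unilever - cancel fair & lovely - sign the petition!"
print(tweet)
print(neutralize(tweet))
# Score both strings with tone_analyzer.tone(...) / paralleldots.emotion(...):
# a human reads the two versions as the same demand, so a large shift in the
# predicted emotion suggests the brand name is driving the output.
```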
At this point, I wouldn’t use either of these tools for this specific case! You tell me if the sentence “Unilever - cancel fair & lovely - sign the petition!” is joyous! 🤦🏻‍♀️
However, let’s say our hypothetical developer still thinks there are benefits to using these tools.
In that case, we’d need to take into account the following criteria. I have to say this list is very preliminary and by no means comprehensive. But at least it gives you a sense of what to look for if you, as a developer, decide to use these tools.
Registration: privacy policies and terms of service
When you want to sign up for a developer account, always read the complete Terms of Service (ToS) and privacy policy. Crucially, this is different from a company website’s terms and policies.
In particular, be vigilant for information about how the data you provide as input is going to be handled. There is a service called Polisis that helps you to compare policies from different service providers (it’s not perfect, but it is still helpful).
Read the developer’s privacy policy and the product-specific policy to understand what data the company collects from you (as an account holder) and how they protect it. Do they encrypt the data at rest and in transit? Is the data they collect personally identifiable? Do they define what they mean by metadata? Do they collect the data you provide as input to the service? Do they retain it? For how long? Do they keep the log files?
Here’s a comparison between the policies of the two services. (For the rest of this post, my comparisons between the details of IBM Tone Analyzer and ParallelDots will appear in gray text boxes like the one below, featuring summaries of what I found in their posted policies and documentation).
IBM Tone Analyzer: When you want to create a developer account, IBM points you to their general privacy policy, which covers everything from visiting the website to using cloud services. It contains some vague statements, such as: "IBM may also share your personal information with selected partners to help us provide you ..." Who are their partners, though? Or: "We will not retain personal information longer than necessary to fulfill the purposes for which it is processed." What is "longer than necessary"? If you want specific information about data collection and retention via the Tone Analyzer API, go to the product document page. Some relevant information includes: "Request logging is disabled for the Tone Analyzer service. The service does not log or retain data from requests and responses." The service "processes but does not store users' data. Users of the Tone Analyzer service do not need to take any action to identify, protect, or delete their data for this service."

ParallelDots: The website says that ParallelDots “protects your data and follow the GDPR compliance guidelines to the last word,” but it doesn’t go further. Which data? Metadata, developers’ information, or users’ data? ParallelDots' ToS says "you may not access the services for purposes of monitoring their availability, performance or functionality, or for any other benchmarking or competitive purposes." This is bizarre to me; does that mean I broke their ToS?!

Documentation
If a company has already provided documentation such as a Model Card for Model Reporting for that specific API, read it before using the service. If not, good luck finding such important information! Look at the API’s documentation and dig in for information about API security, background research papers, training data, architecture and algorithms, evaluation metrics, and recommended use and not-to-use cases.
🔐 API Security. During the past few years, there have been numerous examples of data breaches via the use of insecure APIs. It’s an API provider’s responsibility to detect security vulnerabilities, identify suspicious requests, and provide encrypted traffic and traffic-monitoring methods. Make sure an API provider has already put these security practices in place — and read more about API security here.
Here’s another comparison, this time comparing API security:
IBM Tone Analyzer: IBM suggests developers use IBM Cloud Activity Tracker with LogDNA to monitor the activity of an IBM Cloud account and investigate abnormal activity. The service also requires a strong password and sends you a verification code to confirm your developer account.

ParallelDots: There is no information about API security on the API document page. However, they mention that they only provide encrypted access to premium content. For registration, developers are not required to set a strong password; however, ParallelDots sends you a verification email to confirm your account.

🌏 Accurate and Precise… but for whom? In the example of “Fair & Lovely,” language plays an important role. English-only tweets don’t provide an accurate understanding of discussions around the product, because the conversation isn’t restricted to a single language.
Check the API to see if it supports other languages. If so, what is the accuracy rate for different languages? Service providers often say they support multiple languages, but don’t provide a breakdown of accuracy and other evaluation metrics for each language. Dig into the API document and background research pages, and try to find metrics for different sub-categories.
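If a provider won’t publish this breakdown, you can approximate it yourself with a small, human-labeled sample per language, along the lines of the sketch below. The example rows are illustrative and predict_emotion is a hypothetical stand-in for a wrapper around whichever API you are evaluating.

```python
# Sketch of a per-language accuracy breakdown on a small hand-labeled sample.
# The example rows are illustrative; predict_emotion is a stand-in for a
# wrapper around the emotion API under evaluation.
from collections import defaultdict

labeled_sample = [
    # (tweet text, language code, human-annotated emotion label)
    ("Unilever - cancel fair & lovely - sign the petition!", "en", "anger"),
    ("este producto deberia prohibirse", "es", "anger"),  # illustrative row
]

def predict_emotion(text: str) -> str:
    """Stand-in for the real API call; replace with, e.g., the IBM or
    ParallelDots wrapper from the earlier sketch. A constant keeps the
    example runnable."""
    return "anger"

scores = defaultdict(lambda: {"correct": 0, "total": 0})
for text, lang, gold in labeled_sample:
    scores[lang]["total"] += 1
    if predict_emotion(text) == gold:
        scores[lang]["correct"] += 1

for lang, s in scores.items():
    print(f"{lang}: {s['correct'] / s['total']:.2f} accuracy on {s['total']} examples")
```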
In our case, here’s what I found:
IBM Tone Analyzer: The company lists 11 supported languages. However, there is no breakdown of accuracy or other evaluation metrics by language.

ParallelDots: The company lists 14 supported languages. However, there is no information about accuracy or other evaluation metrics by language.

Suggested (Not) Use Cases. Companies also provide guidance about the suggested uses of their services, but sometimes a use case can be dangerous or unethical. Companies need to be transparent about the cases in which developers should not use their services.
IBM Tone Analyzer: According to the document page, Tone Analyzer use cases include predicting customer satisfaction in support forums, predicting customer satisfaction in Twitter responses, predicting online dating matches, and predicting TED Talk applause. There is no indication of not-to-use cases.

ParallelDots: There are two suggested use cases: “target[ing] detractors to improve service to them” and “brand-watching.” There is no indication of not-to-use cases.

Fairness Practices. In the past couple of years, researchers and practitioners have raised awareness about the discriminatory outcomes of machine learning systems. They’ve provided numerous toolkits to help companies assess the human rights implications of their tools and be transparent about potential social risks. I keep track of different initiatives, papers, and toolkits here.
But how many companies provide that information for their specific ML APIs?
IBM Tone Analyzer: IBM provides information about background research, the data collection process (Twitter data), and the data annotation method. However, there is no mention of potential discriminatory outcomes and no breakdown of demographics or measurements for different sub-groups (language, gender, age, etc.). Fun fact: IBM Research is one of the pioneers in providing fairness and explainability toolkits (check out IBM AI Fairness 360). They also proposed using FactSheets for every ML model to show the origin of training datasets, model specifications, and use cases. But when it comes to their own models, you rarely find such information on their product pages! This reminded me of a great line of poetry from Nizami, which basically means: first fix your own flaws before being too critical of others.

ParallelDots: I found no information about fairness practices.

🛠 Maintenance and Updates
IBM Tone Analyzer: The company frequently updates the service and provides information about the updates. However, some update notes contain generic sentences such as "The service was also updated for internal changes and improvements." What are those internal changes and improvements?

ParallelDots: I couldn’t find information about updates and maintenance.

💬 Developers Community. Communities of developers (via Slack workspaces, Stack Overflow, GitHub, etc.) help share feedback, let developers interact with each other and with service providers, and raise issues around privacy, security, fairness, and explainability for a certain product and in a specific domain.
IBM Tone Analyzer: IBM Watson provides a Slack workspace (there is no dedicated channel for ethical uses, however) and a Stack Overflow developer community. The GitHub page for the Tone Analyzer is here.

ParallelDots: The company has a GitHub page.

Recommendations
To developers
Don’t use machine learning APIs blindly, especially if they are black boxes. In addition to criteria such as cost, speed, and accuracy — as marketed by a service provider — consider criteria related to fairness, privacy, security, and transparency.
If it’s not documented, reach out to service providers and ask them whether they have conducted any fairness audits. It’s their responsibility to publish this information online or walk you through it. Use your buying power, they’ll listen!
Think about the domain for which you will be using the tool. Who might be affected disproportionately by the outcome of integrating a given ML API with your product? Think about gender, race, religion, age, language, accent, country, and socio-economic status (read this to learn more about vulnerable groups who are protected under human rights conventions). I keep track of different ML assessment tools here; you might find them helpful in your assessment process.
Try to find benchmark datasets that relate to discriminatory outcomes of ML projects (the Equity Evaluation Corpus is an example of a benchmark dataset used to examine biases in sentiment analysis systems). Reach out to the people who are involved in creating such benchmarks and ask them for help scrutinizing the API in your specific domain. Check out the FAccT conference directory to find people who work on these issues.
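As a concrete illustration of what such a benchmark checks, the sketch below scores template sentences that differ only in a name or an identity term and compares the results. The template pairs are illustrative stand-ins rather than actual Equity Evaluation Corpus items, and score_anger is a hypothetical wrapper around whichever API you are auditing.

```python
# Sketch of an EEC-style bias probe: sentence pairs that differ only in a name
# or identity term should get (near-)identical emotion scores. The pairs below
# are illustrative stand-ins, not actual Equity Evaluation Corpus items.
TEMPLATE_PAIRS = [
    ("Latisha feels angry about this product.",
     "Emily feels angry about this product."),
    ("He found himself in a terrible situation.",
     "She found herself in a terrible situation."),
]

def score_anger(text: str) -> float:
    """Stand-in for the API call; replace with a wrapper that returns, say,
    the anger score from the service under audit. A constant keeps the
    example runnable."""
    return 0.5

for sent_a, sent_b in TEMPLATE_PAIRS:
    gap = abs(score_anger(sent_a) - score_anger(sent_b))
    # Large, systematic gaps across many pairs suggest the system reacts to
    # the identity term rather than the emotional content of the sentence.
    print(f"gap={gap:.3f}  {sent_a!r} vs {sent_b!r}")
```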
When you suspect something is ethically wrong with an API service in your specific domain, share it with other developers by opening an issue on that service’s GitHub page, Stack Overflow, or developers’ community pages. Almost all service providers have these platforms for the developers to share their issues. Service providers might say it is impossible to test and audit their tools for every single domain because their service is a general-purpose tool. But you can inform them about ethical issues you face within a specific domain for your use cases. By providing public information you can also help other developers who might want to use that service!
If you integrate third party ML APIs in your product, mention it in your product’s privacy policy and terms of services. Don’t minimize it to a sentence saying “we use other parties’ services.” Include information about those third-party services — in this case, an ML service provider. Be transparent about how users’ data are handled because of that specific third-party relationship.
To Machine Learning API Service Providers
The focus of this post was not on service providers but on third-party developers. However, to highlight some of the service providers’ responsibilities when it comes to informing developers, I would say:
Document and be transparent! Don’t bury fairness criteria in a 500-page document. Use more visible and friendly user Interfaces to guide developers to read about fairness and privacy criteria before signing up to use your service.
Add issues related to the fairness, security, and privacy of your own API services to your developer portals and community pages. Let developers discuss these issues within those portals (e.g., by creating a dedicated Slack channel within the developers’ workspace), and encourage developers to share their experiences dealing with fairness, privacy, and security while using your services (the IBM AI Fairness 360 Slack channel and Salesforce UI warnings are good examples). Don’t only showcase “successful” uses and positive testimonials on your marketplace page!
Each tier of developer account (free, standard, premium) brings different levels of responsibility for you. Develop privacy-protective practices to monitor potential misuses of your services. This paper offers some feasible solutions: Monitoring Misuse for Accountable ‘Artificial Intelligence as a Service.’
To ML Auditors and Human Rights & Technology Practitioners
We hear a lot about democratizing the building blocks of digital technologies; we also hear a lot about the interoperability of digital services. These are all good. But they bring new kinds of interactions, data flows, and data ownership matters.
The purpose of this blog post has been to raise awareness about the importance of these often-overlooked relationships and actors. It’s for developers to think about their responsibilities before integrating these APIs into their services. But it’s also for human rights practitioners, privacy advocates, and ethical tech researchers to dissect these issues and find practical guidance to help smaller actors in our data-driven world.
Scrutinize third-party relationships when you audit a certain product/service and try to assess potential adverse human rights impacts of it. Both groups play a role when things go wrong. Going forward, let’s pay more attention to such things as supply chain issues, and carefully examine the role and responsibilities of different actors of the digital technologies ecosystem.
I work on issues at the intersection of technology and human rights. If you are a developer and have been thinking about ways to choose and use building blocks of your product more responsibly please reach out to me. I would be happy to speak with you: rpakzad@taraazresearch.org.
If you are interested in tech & human rights check out Taraaz’s website and sign up for our newsletter.
Original article: https://medium.com/taraaz/developers-choose-wisely-a-guide-for-responsible-use-of-machine-learning-apis-e006e4263cae