Search results for "interpretability": 1 - 9 of 9

  • What are XAI (Explainable AI) and Interpretability?

    What are XAI (Explainable AI) and Interpretability? - AI & Machine Learning Glossary. Explains the terms "explainable AI" and "interpretability": a machine learning model (i.e., the AI itself) whose path to a prediction can be explained by a human, or the technology and research field around such models. Explainable AI (XAI) refers, as the name suggests, to a machine learning model whose process of arriving at a prediction or inference can be explained by people, or to the related technology and research area. The term "XAI" originated in a research project led by DARPA (the U.S. Defense Advanced Research Projects Agency) and has since come into broad use.

  • An introduction to the Language Interpretability Tool (LIT) - 機械学習 Memo φ(・ω・ )

    Overview: Google Research published a paper introducing the Language Interpretability Tool (LIT). It is described as a tool that helps you work out what is going wrong when an NLP model does not behave as expected; it looked useful, so I tried running it and wrote up this short introduction. [2008.05122] The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models. Contents: Overview / What is LIT / Installation / Launching LIT / Starting an instance / quickstart_sst_demo / pretrained_lm_demo / Writing an instance-startup script / The Dataset class / The Model class / Official documen…

  • GitHub - PAIR-code/lit: The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible and framework agnostic interface.

    The Learning Interpretability Tool (🔥LIT, formerly known as the Language Interpretability Tool) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It can be run as a standalone server, or inside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks. LIT is built to answer questions such as: What kind of examples does m…

  • GitHub - MAIF/shapash: 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

    Shapash is a Python library designed to make machine learning interpretable and comprehensible for everyone. It offers various visualizations with clear and explicit labels that are easily understood by all. With Shapash, you can generate a Webapp that simplifies the comprehension of interactions between the model's features, and allows seamless navigation between local and global explainability. (A minimal usage sketch follows the result list below.)

  • Learning Interpretability Tool

    The Learning Interpretability Tool (🔥LIT) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It is aimed at researchers and practitioners looking to understand NLP model behavior through a visual, interactive, and extensible tool. Use LIT to ask and answer questions like: What kind of examples does my model perform…

  • GitHub - pytorch/captum: Model interpretability and understanding for PyTorch

  • GitHub - sicara/tf-explain: Interpretability Methods for tf.keras models with Tensorflow 2.x

  • Captum · Model Interpretability for PyTorch

    import numpy as np
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    class ToyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.lin1 = nn.Linear(3, 3)
            self.relu = nn.ReLU()
            self.lin2 = nn.Linear(3, 2)
            # initialize weights and biases
            self.lin1.weight = nn.Parameter(torch.arange(-4.0, 5.0).view(3, 3))
            self.lin1.bias = nn.Parameter(torch.zeros(1, 3))
            self.lin2. …

    (The snippet is cut off here in the search result; a completed, runnable sketch follows the result list below.)

  • Interpretability in Machine Learning: An Overview

    This essay provides a broad overview of the sub-field of machine learning interpretability. While not exhaustive, my goal is to review conceptual frameworks, existing research, and future directions. I follow the categorizations used in Lipton et al.'s Mythos of Model Interpretability, which I think is the best paper for understanding the different definitions of interpretability. We'll go over ma…

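Two short usage sketches follow for libraries listed above. First, the web app workflow mentioned in the MAIF/shapash entry. This is a minimal sketch, assuming the SmartExplainer API of Shapash 2.x; the toy data and model below are illustrative placeholders, not taken from the repository.

    # Minimal Shapash sketch (assumed API: shapash 2.x SmartExplainer).
    # The data set and model are illustrative placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from shapash import SmartExplainer

    # Tiny synthetic regression problem so the sketch is self-contained.
    X = pd.DataFrame({"x1": range(100), "x2": [v % 7 for v in range(100)]})
    y = X["x1"] * 2.0 + X["x2"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    regressor = RandomForestRegressor(n_estimators=50, random_state=0)
    regressor.fit(X_train, y_train)

    xpl = SmartExplainer(model=regressor)
    xpl.compile(x=X_test)   # compute local feature contributions (SHAP by default)
    app = xpl.run_app()     # serve the interactive web app described in the entry

The web app is served locally by run_app(); see the Shapash documentation for port and deployment options.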
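
The Captum code snippet above is truncated mid-line. Below is a minimal, self-contained sketch of the same IntegratedGradients workflow: the layer sizes mirror the snippet, but the forward pass, the random inputs, and the zero baselines are illustrative rather than copied from the Captum page.

    # Minimal Captum IntegratedGradients sketch; values are illustrative.
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    class ToyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.lin1 = nn.Linear(3, 3)
            self.relu = nn.ReLU()
            self.lin2 = nn.Linear(3, 2)

        def forward(self, x):
            return self.lin2(self.relu(self.lin1(x)))

    model = ToyModel()
    model.eval()

    inputs = torch.rand(2, 3)       # two examples with three features each
    baselines = torch.zeros(2, 3)   # all-zero baseline, a common default

    ig = IntegratedGradients(model)
    # Attribute the first output unit (target=0) to each input feature.
    attributions, delta = ig.attribute(
        inputs, baselines, target=0, return_convergence_delta=True
    )
    print(attributions)
    print("Convergence delta:", delta)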