A roundup of YouTube videos for learning natural language processing from Stanford lecture videos.
The series offers an introduction to text analysis with data mining and machine learning,
and includes explanations of techniques such as automatic text summarization.
The lectures are in English.
- Natural Language Processing
- Professor Dan Jurafsky & Chris Manning
- Stanford University
1~10
- 2.01 - Regular Expressions
- 2.02 - Regular Expressions in Practical NLP
- 2.03 - Word Tokenization
- 2.04 - Word Normalization and Stemming
- 2.05 - Sentence Segmentation
- 3.01 - Defining Minimum Edit Distance
- 3.02 - Computing Minimum Edit Distance
- 3.03 - Backtrace for Computing Alignments
- 3.04 - Weighted Minimum Edit Distance
- 3.05 - Minimum Edit Distance in Computational Biology
- 4.01 - Introduction to N-grams
- 4.02 - Estimating N-gram Probabilities
- 4.03 - Evaluation and Perplexity
- 4.04 - Generalization and Zeros
- 4.05 - Smoothing: Add-One
- 4.06 - Interpolation
- 4.07 - Good-Turing Smoothing
- 4.08 - Kneser-Ney Smoothing
- 5.01 - The Spelling Correction Task
- 5.02 - The Noisy Channel Model of Spelling
- 5.03 - Real-Word Spelling Correction
- 5.04 - State of the Art Systems
- 6.01 - What is Text Classification
- 6.02 - Naive Bayes
- 6.03 - Formalizing the Naive Bayes Classifier
- 6.04 - Naive Bayes: Learning
- 6.05 - Naive Bayes: Relationship to Language Modeling
- 6.06 - Multinomial Naive Bayes: A Worked Example
- 6.07 - Precision, Recall, and the F measure
- 6.08 - Text Classification: Evaluation
- 6.09 - Practical Issues in Text Classification
- 7.01 - What is Sentiment Analysis
- 7.02 - Sentiment Analysis: A baseline algorithm
- 7.03 - Sentiment Lexicons
- 7.04 - Learning Sentiment Lexicons
- 7.05 - Other Sentiment Tasks
- 8.01 - Generative vs. Discriminative Models
- 8.02 - Making features from text for discriminative NLP models
- 8.03 - Feature-Based Linear Classifiers
- 8.04 - Building a Maxent Model: The Nuts and Bolts
- 8.05 - Generative vs. Discriminative models
- 8.06 - Maximizing the Likelihood
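The edit-distance videos above (3.01–3.03) walk through the standard dynamic-programming table for minimum edit distance. As a quick companion, here is a minimal Python sketch of the unweighted version (all operations cost 1, i.e. Levenshtein distance); the function name is my own, not from the lectures.

```python
def min_edit_distance(source: str, target: str) -> int:
    """Unweighted minimum edit distance (Levenshtein) via dynamic programming."""
    n, m = len(source), len(target)
    # dist[i][j] = edit distance between source[:i] and target[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = i  # i deletions to reach the empty target
    for j in range(1, m + 1):
        dist[0][j] = j  # j insertions from the empty source
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,        # delete from source
                dist[i][j - 1] + 1,        # insert into source
                dist[i - 1][j - 1] + sub,  # substitute (free if chars match)
            )
    return dist[n][m]

print(min_edit_distance("intention", "execution"))  # → 5
```

The weighted variant from video 3.04 only changes the per-operation costs (e.g. substitution cost 2 in the lecture's alternative formulation); the table-filling recurrence stays the same.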
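Videos 6.02–6.06 cover multinomial Naive Bayes for text classification. A compact sketch of training with add-one (Laplace) smoothing and classifying in log space, in the spirit of the worked example in 6.06; the data layout and function names are my own choices, not the lectures' code.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train multinomial Naive Bayes with add-one smoothing.
    docs: list of (list_of_words, label) pairs.
    Returns (log-priors, per-class log-likelihoods)."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    priors = {c: math.log(n / len(docs)) for c, n in label_counts.items()}
    likelihoods = {}
    for c in label_counts:
        total = sum(word_counts[c].values())
        likelihoods[c] = {
            w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
            for w in vocab
        }
    return priors, likelihoods

def classify(priors, likelihoods, words):
    """Pick the class maximizing log P(c) + sum of log P(w|c); unknown words are skipped."""
    scores = {
        c: priors[c] + sum(likelihoods[c][w] for w in words if w in likelihoods[c])
        for c in priors
    }
    return max(scores, key=scores.get)
```

Working in log space avoids the floating-point underflow that multiplying many small probabilities would cause on real documents.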
11~20
- 11.01 - The Maximum Entropy Model Presentation
- 11.02 - Feature Overlap: Feature Interaction
- 11.03 - Conditional Maxent Models for Classification
- 11.04 - Smoothing/Regularization/Priors for Maxent Models
- 12.01 - An Intro to Parts of Speech and POS Tagging
- 12.02 - Some Methods and Results on Sequence Models for POS Tagging
- 13.01 - Syntactic Structure: Constituency vs Dependency
- 13.02 - Empirical, Data-Driven Approach to Parsing
- 13.03 - The Exponential Problem in Parsing
- 15.01 - CFGs and PCFGs
- 15.02 - Grammar Transforms
- 15.03 - CKY Parsing
- 15.04 - CKY Example
- 15.05 - Constituency Parser Evaluation
- 16.01 - Lexicalization of PCFGs
- 16.02 - Charniak's Model
- 16.03 - PCFG Independence Assumptions
- 16.04 - The Return of Unlexicalized PCFGs
- 16.05 - Latent Variable PCFGs
- 17.01 - Dependency Parsing Introduction
- 17.02 - Greedy Transition-Based Parsing
- 17.03 - Dependencies Encode Relational Structure
- 18.01 - Introduction to Information Retrieval
- 18.02 - Term-Document Incidence Matrices
- 18.03 - The Inverted Index
- 18.04 - Query Processing with the Inverted Index
- 18.05 - Phrase Queries and Positional Indexes
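The information-retrieval videos above (18.03–18.04) explain the inverted index and conjunctive query processing by intersecting postings lists. A minimal Python sketch under simple assumptions (whitespace tokenization, lowercase normalization); the function names are illustrative, not from the lectures.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict mapping doc_id -> text.
    Returns an inverted index: term -> sorted postings list of doc IDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():  # naive tokenization for illustration
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def and_query(index, *terms):
    """Conjunctive (AND) query: intersect the postings lists of all terms."""
    postings = [set(index.get(t.lower(), ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []
```

Real systems intersect sorted postings lists with a merge walk (shortest list first) rather than Python sets, and a positional index as in 18.05 would additionally store term offsets per document to support phrase queries.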
21~end
- 21.01 - What is Question Answering
- 21.02 - Answer Types and Query Formulation
- 21.03 - Passage Retrieval and Answer Extraction
- 21.04 - Using Knowledge in QA
- 21.05 - Advanced: Answering Complex Questions