UCF101: A Dataset of 101 Human Action Classes From Videos in the Wild

The paper is foundational for researchers training deep learning models (such as 3D CNNs) to recognize human movement.

Key Contributions of the Paper
- It contains 13,320 videos across 101 action categories.
- Actions are divided into five types: Human-Object Interaction, Body-Motion Only, Human-Human Interaction, Playing Musical Instruments, and Sports.

Common Use Cases
- Optical flow evaluation: testing how well an algorithm tracks pixels between frames.
- Feature extraction: extracting spatial-temporal features using models like I3D or C3D.

File Naming
Based on the UCF101 naming convention (v_ActionName_gXX_cYY.avi or .mp4), the code refers to the 60th video group within a specific action category. While the exact action depends on the subdirectory it was pulled from, group "60" is frequently associated with actions like Playing Guitar or Playing Piano in various versions of the dataset distribution.
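As a minimal sketch, the naming convention above can be parsed with a regular expression. The pattern and the function name here are illustrative, not part of any official UCF101 toolkit:

```python
import re

# Matches UCF101-style clip names, e.g. v_PlayingGuitar_g60_c02.avi
# (v_ActionName_gXX_cYY with an .avi or .mp4 extension).
_UCF101_NAME = re.compile(
    r"^v_(?P<action>[A-Za-z]+)_g(?P<group>\d{2})_c(?P<clip>\d{2})\.(?:avi|mp4)$"
)

def parse_ucf101_name(filename):
    """Return (action, group, clip) parsed from a UCF101-style filename."""
    m = _UCF101_NAME.match(filename)
    if m is None:
        raise ValueError(f"not a UCF101-style name: {filename}")
    return m.group("action"), int(m.group("group")), int(m.group("clip"))
```

For example, `parse_ucf101_name("v_PlayingGuitar_g60_c02.avi")` yields `("PlayingGuitar", 60, 2)`, exposing the group code discussed above.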