This paper introduces a novel spatial feature for human action recognition and analysis. The positions and orientations of the body joints relative to a reference point are used to build an occupancy map of the 3D space occupied during action execution. The joint data are acquired with the Microsoft Kinect v2 sensor and undergo a pose-invariant normalization to eliminate anatomical differences between persons. The body-related occupancy map (BROM) and its 2D views serve as feature input to a random forest classifier. The approach is evaluated on a self-captured database of 23 human actions for gameplay. On this database, an F1-score of 0.84 is achieved for the front view of the BROM computed from the complete skeleton.
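The abstract does not detail how the occupancy map is constructed. As an illustrative sketch only (the voxel grid size, extent, reference joint, and function names below are assumptions, not the paper's actual BROM definition), joint trajectories could be voxelized relative to a reference joint and projected to obtain a 2D front view:

```python
import numpy as np

def occupancy_map(joints, ref_idx=0, grid=16, extent=1.0):
    """Sketch: binary 3D occupancy map from joint trajectories.

    `joints` has shape (frames, num_joints, 3). Positions are expressed
    relative to the reference joint (index `ref_idx`) and binned into a
    grid x grid x grid voxel volume spanning +/- `extent` metres per axis.
    All parameters here are illustrative assumptions.
    """
    rel = joints - joints[:, ref_idx:ref_idx + 1, :]   # reference-relative coordinates
    # Map positions in [-extent, extent) to voxel indices in [0, grid)
    idx = np.floor((rel + extent) / (2 * extent) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)                    # clamp outliers to the grid
    vol = np.zeros((grid, grid, grid), dtype=bool)
    vol[idx[..., 0], idx[..., 1], idx[..., 2]] = True  # mark every visited voxel
    return vol

def front_view(vol):
    # A 2D view as used for the classifier input: project along one axis
    return vol.any(axis=2)
```

Such 2D projections reduce the 3D volume to compact image-like features, which is consistent with the abstract's use of BROM "views" as classifier input.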