About This Architecture
This length-adaptive hybrid architecture fuses a Graph Convolutional Network (GCN) with a BERT4Rec Transformer for sequential recommendation. User sequences of up to 200 items flow through shared 64-dimensional item embeddings into two parallel paths: a 2-layer LightGCN encoder over an item co-occurrence graph (co-occurrence window = 5, edge threshold = 2) and a 2-block bidirectional Transformer trained with cloze masking.

An adaptive fusion layer blends the two signals according to the user's history length: short sequences (≤10 items) weight the GCN output at 70%, medium sequences (10–30) blend the two equally, and long sequences (>30) weight the Transformer at 70%. The fused representation is then scored against all item embeddings by inner product to produce a top-K ranking.

By routing short-history users to graph structure and long-history users to sequential patterns, the design addresses the cold-start versus long-tail trade-off while optimizing HR@K, NDCG@K, and MRR@K. Fork this diagram on Diagrams.so to customize fusion thresholds, embedding dimensions, or propagation layers for your own recommendation pipeline.
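The fusion-and-scoring step described above can be sketched in a few lines. This is a minimal NumPy illustration, not the diagram's actual implementation: the function names (`fusion_weights`, `fuse_and_score`) and the random toy embeddings are assumptions for demonstration, and the boundary item count of exactly 10 is assigned to the "short" bucket following the "≤10" rule stated above.

```python
import numpy as np

def fusion_weights(seq_len: int) -> tuple[float, float]:
    """Length-based gate returning (gcn_weight, transformer_weight)."""
    if seq_len <= 10:      # short history: lean on graph structure (70% GCN)
        return 0.7, 0.3
    elif seq_len <= 30:    # medium history: equal blend
        return 0.5, 0.5
    else:                  # long history: lean on sequential patterns (70% Transformer)
        return 0.3, 0.7

def fuse_and_score(gcn_vec, trf_vec, item_emb, seq_len, k=10):
    """Blend the two 64-d user vectors, then rank all items by inner product."""
    w_g, w_t = fusion_weights(seq_len)
    user = w_g * gcn_vec + w_t * trf_vec   # fused user representation, shape (64,)
    scores = item_emb @ user               # inner-product scores, shape (num_items,)
    return np.argsort(-scores)[:k]         # indices of the top-K items

# Toy usage with random embeddings (illustrative only).
rng = np.random.default_rng(0)
item_emb = rng.normal(size=(1000, 64))            # 1000 items, 64-d shared embeddings
gcn_vec = rng.normal(size=64)                     # output of the LightGCN path
trf_vec = rng.normal(size=64)                     # output of the Transformer path
top10 = fuse_and_score(gcn_vec, trf_vec, item_emb, seq_len=7)
```

A hard three-bucket gate like this is the simplest reading of the thresholds above; a production variant might instead learn a smooth gating function of sequence length so the blend varies continuously.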