Abstract: Modern deep networks often rely on attention modules, yet these modules remain limited: they typically exploit only a single type of channel-wise pattern, or combine two types at considerable computational cost.