Navigating Queue Types in MikroTik RouterOS 7

MikroTik RouterOS 7 brings a plethora of features designed to enhance network performance, manage bandwidth, and improve data packet delivery efficiency. Among these, the variety of supported queue types stands out as a crucial tool for network administrators aiming to optimize network traffic. Each queue type employs a distinct mechanism for managing data packets, offering unique advantages and potential drawbacks. This article explores the supported queue types in MikroTik RouterOS 7, delving into how each works and discussing their pros and cons.

Understanding Queue Types

Queue types in MikroTik RouterOS 7 are essentially algorithms for managing traffic flow through the router. They determine how packets are processed, prioritized, and forwarded, impacting overall network performance, latency, and throughput.

The following queue types are supported in MikroTik RouterOS 7:

  • bfifo - Byte First-in, First-out
  • cake - Common Applications Kept Enhanced
  • codel - Controlled Delay
  • fq_codel - Fair Queuing Controlled Delay
  • mq_pfifo - Multi-Queue Packet First-in, First-out
  • pcq - Per Connection Queue
  • pfifo - Packet First-in, First-out
  • red - Random Early Detection
  • sfq - Stochastic Fair Queueing

Let's take a closer look at each of these queue types, understanding their mechanisms, pros, and cons.


bfifo Byte First-in, First-out

Mechanism: Byte First-in, First-out (bfifo) is a straightforward queue management algorithm that processes packets strictly in the order they arrive: the first packet to enter the queue is the first to be transmitted, hence the name First-in, First-out (FIFO). What distinguishes bfifo from its packet-counting sibling pfifo is how the queue's capacity is measured: the buffer limit is expressed in bytes rather than in packets, so the queue holds as many packets as fit within the configured byte budget. This method does not discriminate between packet types or services, providing a fair but uncomplicated approach to traffic management. Its simplicity also means it lacks mechanisms to prioritize critical traffic during periods of congestion, potentially impacting the performance of real-time applications such as VoIP or video streaming. Despite this, bfifo's low computational overhead makes it an attractive option for scenarios where basic queue management is required without the need for sophisticated traffic prioritization.

Pros:

  • Simplicity and predictability in packet handling.
  • Fair treatment of packets regardless of type.

Cons:

  • Lack of prioritization can lead to increased latency for critical applications during congestion.
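
A minimal bfifo setup might look like the following RouterOS 7 sketch; the queue name is illustrative, bfifo-limit is the buffer size in bytes, and ether1 stands in for whichever interface you are shaping (verify parameter spellings with /queue type print on your version):

```
# Create a bfifo queue type with a 32000-byte buffer (example value)
/queue type add name=my-bfifo kind=bfifo bfifo-limit=32000
# Attach it to an interface queue
/queue interface set ether1 queue=my-bfifo
```

A larger limit absorbs bigger bursts at the cost of potential bufferbloat, so size it to your link speed.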

cake Common Applications Kept Enhanced

Mechanism: The Cake queue management algorithm represents a significant advancement in handling network traffic, particularly designed to address the challenges of bufferbloat while simultaneously ensuring fairness across multiple data flows. Cake intelligently manages packet queuing and dispatching by automatically adjusting to network conditions, thereby optimizing bandwidth distribution and minimizing latency for all types of traffic. It incorporates features such as bandwidth shaping, flow isolation, and prioritization of small packets, which are crucial for latency-sensitive applications like VoIP and gaming. Unlike simpler algorithms, cake dynamically categorizes traffic into different flows, applying fair queuing and active queue management to each, ensuring that no single flow can monopolize the available bandwidth. This sophisticated approach allows cake to provide stable, low-latency connections across a wide range of network scenarios.

Pros:

  • Automatically adjusts to changing network conditions.
  • Provides excellent latency performance under load.
  • Fairness for different types of traffic.

Cons:

  • More complex to configure correctly.
  • Might require more processing power, impacting router performance.
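
In RouterOS 7 terms, a cake queue type shaped to a known link rate might be sketched as follows; the queue name, the 100M bandwidth, and the diffserv4 tier scheme are example choices, and cake-nat=yes (per-host fairness behind NAT) is optional:

```
# cake shaped to a 100 Mbps link, with four-tier diffserv handling (example values)
/queue type add name=my-cake kind=cake cake-bandwidth=100M cake-diffserv=diffserv4 cake-nat=yes
# Apply it to the WAN-facing interface queue
/queue interface set ether1 queue=my-cake
```

Setting cake-bandwidth slightly below the true link rate keeps the queue inside the router, where cake can manage it.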

codel Controlled Delay

Mechanism: The codel queue management algorithm is designed to minimize network latency, specifically targeting the issue of bufferbloat. Codel operates on a simple yet effective principle: it monitors the time packets spend in the queue and begins dropping packets (thus signaling congestion) when their queue time exceeds a predefined threshold. This active queue management technique encourages TCP flows to reduce their transmission rate in response to these drops, preventing the queue from becoming overloaded and keeping latency low. Unlike traditional queue management algorithms that rely on packet loss or queue-length thresholds, codel focuses on packet sojourn time, making it highly effective at maintaining low latency across a variety of network conditions without the need for complex configuration.

Pros:

  • Reduces latency significantly, improving performance for real-time applications.
  • Simple configuration with minimal parameters.

Cons:

  • Packet drops might affect throughput under certain conditions.
  • May not be ideal for networks with highly variable bandwidth.
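
A codel queue type needs very little tuning; the sketch below uses the algorithm's well-known defaults of a 5 ms target sojourn time and a 100 ms measurement interval (the queue name and interface are examples):

```
# codel with a 5 ms target delay and 100 ms measurement interval
/queue type add name=my-codel kind=codel codel-target=5ms codel-interval=100ms
/queue interface set ether1 queue=my-codel
```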

fq_codel Fair Queuing Controlled Delay

Mechanism: Fq_codel combines the innovative approaches of Fair Queuing (FQ) and the Controlled Delay (codel) algorithm to create a highly efficient queue management system. Fq_codel intelligently segments network traffic into separate flows, applying codel's active queue management principles to each one. This dual approach allows fq_codel to minimize latency by actively managing queue lengths while ensuring fair bandwidth distribution among all active data flows. By preventing any single flow from dominating the bandwidth, fq_codel effectively combats bufferbloat, maintaining low latency and jitter across the network. This makes it especially advantageous for mixed-traffic environments where real-time applications need to coexist with bulk data transfers.

Pros:

  • Combats bufferbloat while ensuring fairness among users.
  • Maintains low latency across all traffic types.

Cons:

  • Complexity in understanding and configuring the combined behaviors.
  • Requires monitoring to optimize performance.
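
An fq_codel queue type could be sketched as follows; the flow count and ECN setting are example values, and the exact kind and parameter spellings should be checked against your RouterOS 7 build:

```
# fq_codel with 1024 flow buckets and ECN marking instead of early drops
/queue type add name=my-fq-codel kind=fq_codel fq-codel-flows=1024 fq-codel-ecn=yes
/queue interface set ether1 queue=my-fq-codel
```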

mq_pfifo Multi-Queue Packet First-in, First-out

Mechanism: Mq_pfifo extends the traditional pfifo mechanism with support for multiple transmit queues. Instead of funneling all packets through a single FIFO buffer, mq_pfifo maintains a separate pfifo queue for each of an interface's hardware transmit queues, which makes it particularly useful on multi-core (SMP) systems with Ethernet interfaces that support multiple transmit queues. Each sub-queue retains pfifo's first-in, first-out behavior and its own packet limit, so mq_pfifo inherits pfifo's low overhead and predictability while allowing the router to spread transmission work across CPU cores and hardware queues. Like pfifo, it performs no content-based prioritization, so it is best suited to scenarios where raw forwarding throughput matters more than differentiated treatment of traffic.

Pros:

  • Scales pfifo's simplicity across multiple CPU cores and hardware transmit queues.
  • Minimal processing overhead, well suited to high-throughput interfaces.

Cons:

  • Like pfifo, offers no traffic prioritization or congestion signaling.
  • Provides a benefit only on interfaces whose hardware supports multiple transmit queues.
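
A minimal mq_pfifo configuration might look like this; the queue name is illustrative and mq-pfifo-limit caps each sub-queue in packets:

```
# mq_pfifo with a 50-packet limit per transmit queue (example value)
/queue type add name=my-mq-pfifo kind=mq_pfifo mq-pfifo-limit=50
/queue interface set ether1 queue=my-mq-pfifo
```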

pcq Per Connection Queue

Mechanism: PCQ is a sophisticated queue management technique offered by MikroTik, designed to optimize the distribution of network bandwidth among multiple active users and connections. Unlike traditional queueing mechanisms that manage traffic in bulk, PCQ uniquely identifies and segregates traffic into dynamically created sub-queues based on source or destination addresses, essentially providing a fair bandwidth allocation to each active flow.

The brilliance of PCQ lies in its ability to balance network load efficiently. By allocating each connection its own queue, PCQ ensures that no single user or service disproportionately consumes bandwidth, preventing network congestion and maintaining equitable access to network resources. This is particularly advantageous in scenarios with high user density, such as public Wi-Fi networks, ISPs, and large corporate networks, where fair usage policies are crucial for maintaining service quality across the board.

PCQ is highly configurable, allowing administrators to set both the rate of each queue and the total available bandwidth for the aggregated queues. This flexibility enables precise control over how bandwidth is allocated, ensuring that critical applications receive the necessary resources while optimizing the overall network performance.

Pros:

  • Ensures fair bandwidth distribution, preventing individual users or services from monopolizing network resources.
  • Enhances network efficiency and user satisfaction by dynamically adjusting to varying traffic loads.
  • Simplifies bandwidth management for network administrators through automatic queue adjustments.

Cons:

  • Requires initial configuration and tuning to align with specific network needs and traffic patterns.
  • In networks with predominantly high-priority or latency-sensitive traffic, PCQ's equal distribution model may necessitate additional rules for prioritization.

In practice, implementing PCQ on a MikroTik router typically involves setting up simple queues for different user groups or services and defining the PCQ parameters to manage bandwidth distribution effectively. For example, an administrator might guarantee VoIP traffic enough bandwidth while distributing the remaining bandwidth equally among users for general internet usage. Through thoughtful configuration, PCQ can significantly enhance network performance, making it a powerful tool in the network administrator's arsenal for balancing fairness and efficiency in bandwidth allocation.
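
Such a setup might be sketched in RouterOS 7 as follows; the 5M per-user rate, the 192.168.88.0/24 subnet, and the 100M aggregate limit are all example values:

```
# Per-user download limit of ~5 Mbps, one sub-queue per destination address
/queue type add name=pcq-down kind=pcq pcq-rate=5M pcq-classifier=dst-address
# Matching upload limit, one sub-queue per source address
/queue type add name=pcq-up kind=pcq pcq-rate=5M pcq-classifier=src-address
# Apply both in a simple queue covering the client subnet
/queue simple add name=clients target=192.168.88.0/24 max-limit=100M/100M queue=pcq-up/pcq-down
```

With pcq-rate=0, each active sub-queue instead receives an equal share of whatever bandwidth the parent limit allows.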


pfifo Packet First-in, First-out

Mechanism: pfifo is a straightforward queue management algorithm that processes and forwards packets strictly in the order they arrive, with the queue's capacity measured in packets. Functioning on the fundamental FIFO principle, pfifo does not differentiate between packet types, sizes, or priorities, ensuring a simple, unbiased approach to packet handling. This means the first packet entering the queue is the first to be transmitted, promoting fairness in packet treatment but without any specific measures to manage network congestion or prioritize critical traffic. While pfifo's simplicity makes it appealing for scenarios where minimal queue management overhead is desired, its lack of prioritization capabilities can be a limitation in networks that require differential treatment for various types of traffic, such as prioritizing VoIP over bulk data transfer. Nonetheless, for small networks or applications where advanced traffic management is not critical, pfifo offers an efficient, low-complexity solution for maintaining a consistent flow of data packets.

Pros:

  • Simple and straightforward packet processing.
  • Minimal processing overhead.

Cons:

  • Like bfifo, lacks traffic prioritization capabilities.
  • Not suitable for networks requiring traffic differentiation.
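
A pfifo queue type is about the simplest possible configuration; only the packet limit matters (the value below is an example):

```
# pfifo holding at most 50 packets before tail-dropping new arrivals
/queue type add name=my-pfifo kind=pfifo pfifo-limit=50
/queue interface set ether1 queue=my-pfifo
```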

red Random Early Detection

Mechanism: RED is an advanced queue management algorithm designed to preemptively mitigate network congestion before it becomes problematic. RED operates by monitoring the average queue length and, based on this, it randomly drops incoming packets early when it anticipates the queue reaching its maximum capacity. This early packet drop serves as a signal to the senders to reduce their transmission rate, aiming to avoid the onset of congestion. Unlike simpler queue management mechanisms that only react after congestion has occurred, RED's proactive approach helps maintain smoother traffic flow and stabilizes queue lengths, which can enhance overall network performance. However, configuring RED requires careful tuning of its parameters, such as minimum and maximum threshold levels for average queue length, to effectively balance between throughput and delay. While highly effective in diverse network environments, particularly in TCP/IP networks, the success of RED largely depends on its ability to accurately predict and preempt congestion, making it a sophisticated tool for network administrators seeking to optimize data traffic and minimize latency.

Pros:

  • Helps in avoiding sudden traffic congestion.
  • Can improve throughput by preventing queue overflows.

Cons:

  • Incorrect configuration can lead to increased packet loss.
  • Balancing between throughput and delay can be challenging.
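
A red queue type might be sketched as follows; all thresholds are example values expressed in packets, with red-avg-packet (in bytes) used to estimate the average queue size:

```
# Start random drops once the average queue exceeds 10 packets, drop all above 50
/queue type add name=my-red kind=red red-min-threshold=10 red-max-threshold=50 \
    red-limit=60 red-burst=20 red-avg-packet=1000
/queue interface set ether1 queue=my-red
```

The gap between the minimum and maximum thresholds controls how gradually the drop probability ramps up.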

sfq Stochastic Fair Queueing

Mechanism: SFQ is a queue management algorithm that aims to distribute available bandwidth evenly across all active flows, regardless of the number of packets each flow sends. By employing a hashing technique to sort incoming packets into separate queues based on their flow (identified by source and destination addresses, ports, and protocol), SFQ ensures that each flow gets a fair chance to transmit its data, effectively preventing any single flow from dominating the bandwidth. This is particularly useful in environments where bandwidth needs to be allocated equitably among users or applications, such as in shared internet access scenarios. The "stochastic" part of SFQ comes from the way it periodically scrambles the hashing parameters to prevent flows from being permanently disadvantaged due to their hash value. This dynamic approach helps maintain fairness over time, even as flows start and stop or their data transmission patterns change. While SFQ does not prioritize traffic based on content or service type, its ability to provide equal access to network resources makes it an invaluable tool for managing congestion and ensuring a smooth user experience for all network participants. However, its fairness can sometimes be a drawback for networks that need to prioritize certain types of traffic, such as VoIP or streaming media, requiring additional configurations to meet these needs effectively.

Pros:

  • Promotes fairness and prevents any single flow from dominating bandwidth.
  • Simple to implement with minimal configuration.

Cons:

  • May not adequately prioritize time-sensitive traffic.
  • Random distribution can lead to inefficiencies in traffic handling.
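
An sfq queue type needs only two parameters; the sketch below uses commonly cited defaults (a 5-second rehash period and a 1514-byte allotment):

```
# sfq that re-randomizes its flow hash every 5 seconds
/queue type add name=my-sfq kind=sfq sfq-perturb=5 sfq-allot=1514
/queue interface set ether1 queue=my-sfq
```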

What a mouthful, right?

The array of queue types supported by MikroTik RouterOS 7 offers network administrators a powerful toolkit for tailoring traffic management to specific network needs. Whether the goal is to minimize latency, manage congestion, or ensure fair bandwidth distribution, understanding the mechanisms, pros, and cons of each queue type is crucial. By carefully selecting and configuring the appropriate queue type, administrators can significantly enhance network performance, providing a smoother, more responsive experience for end-users.
