Wednesday, August 14, 2013

Data Queue Vs MailBox: Send data

① Difference between sending data over data queue and mail box

A data queue has fixed-size data storage, so a count of valid entries is also maintained. If a task is waiting on the receive side, a wake-up procedure needs to be executed. If the queue is full, the task calling this function goes into the waiting state, so a (priority-based) waiting procedure needs to be executed. If neither case applies, the data just needs to be put into the queue.

Consider one more special case in the data queue: "Forced Send". A forced send neither waits for a receiving task to pass the data to (synchronous message passing) nor waits for the data queue to get empty space. If the queue is full, it releases one element at the head as if that data had been read, and the new data is added at the tail. This function can be used from task context, timer handler context and interrupt handlers, whereas snd_dtq can be used only from task context. See the implementation at ④. Incidentally, ifsnd_dtq cannot be used from task context.
 
In a mailbox, if there is no receiving task waiting, the message is queued in priority order. Otherwise, a wake-up procedure passes the message to the waiting task. Receiving tasks can also be made to wait in priority order, so both the receiving tasks and the messages have a priority.
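
As a rough caller-side sketch of the difference, the fragment below sends the same value both ways. It is only an illustration: the object IDs, the payload type and the header name are assumptions, not part of the posts above.

#include <itron.h>               /* header name differs from kernel to kernel */

#define DTQ_ID  1                /* illustrative IDs, assumed created elsewhere */
#define MBX_ID  1

typedef struct {
        T_MSG header;            /* kernel-managed link header must come first */
        int   payload;
} my_msg_t;

void producer_task(VP_INT exinf)
{
        static my_msg_t msg;     /* must stay valid until the receiver has used it */

        snd_dtq(DTQ_ID, (VP_INT)42);     /* the one-word value itself is copied into the queue */

        msg.payload = 42;
        snd_mbx(MBX_ID, (T_MSG *)&msg);  /* only the pointer is handed to the receiver */
}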
 
② ITRON implementation of sending data over data queue

ER snd_dtq(ID dtqid, VP_INT data)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        If (count of valid data in queue == 0){
                /* then some tasks may be waiting. can pass directly */
                Check head of receive wait queue
                If there is valid task id {
                        Get the pointer to store data from task's context and store the data;
                        if task is waiting in Timeout
                                remove from timer queue
                        Set task status as Ready
                        if the task is NOT suspended {
                                Change the task to the Ready Queue of priority
                                If (the task priority is higher than the current){
                                        call scheduler
                                        return error code from scheduler
                                } else {
                                        Exit critical section and return error code from it (E_OK)
                                        (return E_OK)
                                }
                        } else {
                                Just delete the task from the receive wait queue
                                Exit critical section and return the error code from it (E_OK)
                                (return E_OK)
                        }
                }
        }
        If (count of valid data in queue reached maximum count) {
                If (polling mode)
                        Exit critical section and return E_TMOUT;
                If (this function is called from interrupt context) or
                If (called from task independent state ex: timer handlers) or
                If (called from dispatch disabled state)
                        Return E_CTX;
                If (Timeout value is specified) {
                        Add task into timer queue;
                        Set TCB flags that 'Waiting for send wait queue, Check for Timeout'
                } else (Wait forever) {
                        Set TCB flags that 'Waiting forever for send wait queue'
                }
                Set data queue ID in TCB
                Store the data in context
                Select Queue according to queueing order
                If (Tasks should be granted in FIFO order)
                        Select send wait queue->Queue[0] as Queue
                Else (Should be granted in Priority order)
                        Select send wait queue->Queue[Current Tasks Priority]
                Change the task from ReadyQueue to the Data queue's send wait Queue
                Schedule the tasks, since this task is going to sleep
                (Once the task is released, execution will return here)
                Return the value set by scheduler(Scheduler directly set return value before return here)
        } else {
                Store the data at the tail pointer
                Increment count for valid data
                Update the tail pointer
                if the tail pointer reached end of data queue
                        Wrap it to start of data queue
        }
        Exit critical section and return the error code from it (E_OK)
        return E_OK;
}
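
To make the last branch above ("store the data at the tail pointer") concrete, here is a minimal C sketch of that step alone. The dtq_cb layout is an assumption that mirrors the start/end/head/tail/count fields described in the creation post below, not the kernel's real structure.

typedef struct {
        VP_INT *start;           /* start of the data storage area             */
        VP_INT *end;             /* one past the last element                  */
        VP_INT *head;            /* read pointer                               */
        VP_INT *tail;            /* write pointer                              */
        UINT    count;           /* number of valid data elements              */
} dtq_cb;

static void dtq_put(dtq_cb *q, VP_INT data)
{
        *q->tail = data;                 /* store the data at the tail pointer        */
        q->count++;                      /* increment count for valid data            */
        if (++q->tail == q->end)         /* wrap to the start when the end is reached */
                q->tail = q->start;
}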

③ ITRON implementation of sending data over mail box

ER snd_mbx(ID mbxid, T_MSG *pk_msg)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        /* Check some tasks may be waiting. can pass message directly */
        for (each priority's receive wait queue, until the queue tail is reached){
                Check head of receive wait queue
                If there is valid task id {
                        Get the pointer to store message from task's context and store the message pointer;
                        if task is waiting in Timeout
                                remove from timer queue
                        Set task status as Ready
                        if the task is NOT suspended {
                                Change the task to the Ready Queue of priority
                                If (the task priority is higher than the current){
                                        call scheduler
                                        return error code from scheduler
                                } else {
                                        Exit critical section and return error code from it (E_OK)
                                        (return E_OK)
                                }
                        } else {
                                Just delete the task from the receive wait queue
                                Exit critical section and return the error code from it (E_OK)
                                (return E_OK)
                        }
                }
        }
        Take message queue head;
        if (the number of message priority levels is more than 1)
                Get the message queue head corresponding to that priority;
        /* Add the message to the tail of the list pointed by the queue head */
        if queue head points to NULL
                queue head = message;
        else
                queue tail->next = message;
        Update the queue tail too to point message;
        message -> next = NULL;
        Exit critical section and return the error code from it (E_OK)
        return E_OK;
}
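
The tail-append step above can be sketched like this; the per-priority (head, tail) pair and the link field name are assumptions, the only point being that the kernel threads the messages through the header it asks the user to reserve.

typedef struct msg_hdr {
        struct msg_hdr *next;            /* link maintained by the kernel */
} msg_hdr;

static void mbx_append(msg_hdr **q_head, msg_hdr **q_tail, msg_hdr *msg)
{
        msg->next = NULL;
        if (*q_head == NULL)
                *q_head = msg;           /* list was empty: message becomes the head */
        else
                (*q_tail)->next = msg;   /* otherwise append after the current tail  */
        *q_tail = msg;                   /* the new message is always the new tail   */
}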

④ ITRON implementation to forcefully send data in Data Queue

ER fsnd_dtq(ID dtqid, VP_INT data)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        Check receive queue head;
        if (there is a waiting receive task in the queue)
        {
                take the task at the head of the queue;
                assign the data to this waiting task through context;
                if task is waiting in Timeout
                        remove from timer queue
                Set task status as Ready
                if the task is NOT suspended {
                        Change the task to the Ready Queue of priority
                        If (the task priority is higher than the current){
                                call scheduler
                                return error code from scheduler
                        } else {
                                Exit critical section and return error code from it (E_OK)
                                (return E_OK)
                        }
                } else {
                        Just delete the task from the receive wait queue
                        Exit critical section and return the error code from it (E_OK)
                        (return E_OK)
                }
        }
        if (count of data in queue == size of data queue (queue is full))
        {
                Dummy read (scrap) data at the head of queue (advance read pointer);
                /* frees a slot for the new data; the count does not change since the new data takes it */
        }
        else {
                increment the data count;
        }
        add the data at the write pointer (tail of queue);
        update the write pointer;
        Exit critical section and return the error code from it (E_OK)
        return E_OK;
}
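
Reusing the dtq_cb sketch from ②, the forced-send data path could look roughly like this; when the queue is full the oldest element is scrapped, so the count stays the same.

static void dtq_force_put(dtq_cb *q, VP_INT data)
{
        if (q->count == (UINT)(q->end - q->start)) {
                /* queue is full: scrap the element at the head; count does not change */
                if (++q->head == q->end)
                        q->head = q->start;
        } else {
                q->count++;              /* there was still room: one more valid element */
        }
        *q->tail = data;                 /* add the new data at the tail                 */
        if (++q->tail == q->end)
                q->tail = q->start;
}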

See you in other RTOS posts..

Tuesday, August 13, 2013

Data Queue Vs Mailbox: Creation

①Difference between Data queue and Mail box

In a data queue the data size is just one word, and the data is copied into a queue maintained by the kernel. When the receiver receives the data, it is copied to the receiver and removed from the queue. The queue size is limited (specified by the user), so when there is no space the sender waits until one slot is freed, and when there is no data the receiving task is kept waiting. Therefore two more queues are needed for the waiting tasks. Receiving tasks wait only in FIFO order, while sending tasks can be selected to wait in either priority or FIFO order. When the data queue size is zero, the sending task passes the data directly to the receiving task; whichever comes earlier waits for the other one. This is called 'synchronous message passing'.

But in a mailbox the data is not copied; a pointer to the data is passed to the receiver, and the data buffer must include space for a header maintained by the kernel. The number of entries is unlimited, since the kernel just links the data buffers together into a list, so there is no waiting on the sender side. In a mailbox each message has a priority, so when there is more than one priority level the messages are delivered in priority order. The receiving tasks can also be queued in priority order.
 
In a data queue, the data does not have any priority.
 
Implementation-wise, the data queue takes the arguments of ① the number of data elements (data queue size) ② the memory area for the data queue (if NULL is specified, it will be allocated by the kernel automatically) ③ whether to order sending tasks in FIFO or priority order
 
The mailbox takes the arguments of ① the number of message priority levels ② the memory area for implementing the priority queues (if NULL is specified, it will be allocated by the kernel automatically) ③ an attribute specifying whether to order waiting receive tasks in FIFO or priority order
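
A hedged usage sketch of both creation calls with the arguments listed above (µITRON 4.0 style packets; the IDs, counts and attribute choices are only illustrative):

#include <itron.h>

#define DTQ_ID  1
#define MBX_ID  1

void create_ipc_objects(void)
{
        T_CDTQ cdtq;
        T_CMBX cmbx;

        cdtq.dtqatr = TA_TFIFO;           /* ③ sending tasks wait in FIFO order        */
        cdtq.dtqcnt = 8;                  /* ① number of data elements                 */
        cdtq.dtq    = NULL;               /* ② NULL: the kernel allocates the storage  */
        cre_dtq(DTQ_ID, &cdtq);

        cmbx.mbxatr  = TA_TPRI | TA_MPRI; /* ③ receivers and messages in priority order */
        cmbx.maxmpri = 4;                 /* ① number of message priority levels        */
        cmbx.mprihd  = NULL;              /* ② NULL: the kernel allocates the heads     */
        cre_mbx(MBX_ID, &cmbx);
}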

② ITRON implementation of creating data queue

ER cre_dtq(ID dtqid, T_CDTQ *pk_cdtq)
{
        Check context. If called from ISR,
                return E_CTX;
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        Allocate memory for the data queue and send task queue;
        (for data queue, each element is sizeof(void pointer))
        Initialize the data queue structure:
        start, head and tail pointers are initialized to the start address
        end pointer is initialized to the end address
        Counter for number of valid data elements is initialized to zero
        Initialize the receiving task waiting queue structure; (FIFO order only)
                Initialize queue as follows:
                [ Queue[0] | QueueTail ]
        Initialize the sending task waiting queue structure;
        If (tasks are queued in FIFO order)
                Initialize queue as follows:
                [ Queue[0] | QueueTail ]
        else (tasks are queued in priority order)
                Initialize as follows:
                [ Priority0 | Priority1 | ... | MaxPri | QueueTail ]
        Exit critical section (Interrupt Restore)
        return E_OK;
}
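
Since the original figure of the data queue structure is not reproduced here, the initialization above can be pictured with the dtq_cb sketch assumed in the send-data post; again, this is only an illustration.

static void dtq_init(dtq_cb *q, VP_INT *storage, UINT dtqcnt)
{
        q->start = storage;              /* start, head and tail all begin at the start address */
        q->head  = storage;
        q->tail  = storage;
        q->end   = storage + dtqcnt;     /* end points one past the last element                */
        q->count = 0;                    /* no valid data yet                                   */
}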

③ ITRON implementation of creating mail box

ER cre_mbx(ID mbxid, const T_CMBX *pk_cmbx)
{
        Check context. If called from ISR,
                return E_CTX;
        Sanity check on arguments;
        Enter critical section (Interrupt Disable);
        If maximum message priority has NOT been specified, keep it as 1;
        Allocate memory for the message priority queue and the receive task waiting queue;
        Remember the pointers for the message queue, maximum priority, etc;
        Initialize the message priority queue (one list head per message priority level);

        Initialize the task waiting queue structure;
        If (tasks are queued in FIFO order)
                Initialize queue as follows:
                [ Queue[0] | QueueTail ]
        else (tasks are queued in priority order)
                Initialize as follows:
                [ Priority0 | Priority1 | ... | MaxPri | QueueTail ]
        Exit critical section (Interrupt Restore)
        return E_OK;
}
 

Variable Vs Fixed memory pool: Deletion

① Delete Variable and Fixed Memory Pool

Similar to the creation of a memory pool, deletion cannot be done from interrupt context; it can be done only from task context. All the tasks waiting in the pool's queue are simply woken up, and E_DLT is set as the return error code for them.

Deletion of a fixed memory pool is the same as for a variable memory pool.
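
A hedged caller-side sketch of what a waiting task sees when the pool is deleted underneath it; MPL_ID and the requested size are placeholders.

#include <itron.h>

#define MPL_ID  1

void worker_task(VP_INT exinf)
{
        VP blk;
        ER er = get_mpl(MPL_ID, 128, &blk);      /* may block while no memory is free          */

        if (er == E_DLT)
                return;                          /* pool was deleted while waiting: no block   */

        /* ... use blk ... */
        rel_mpl(MPL_ID, blk);
}

void manager_task(VP_INT exinf)
{
        del_mpl(MPL_ID);                         /* wakes every waiter with E_DLT, frees pool  */
}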

② uITRON implementation to delete variable memory pool

ER del_mpl(ID mplid)
{
        Check context. If called from ISR,
                return E_CTX;
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        For each task waiting on the memory pool's queue {
                Set E_DLT at the error code register of task's context
                If task is waiting for timeout {
                        Delete task from timer queue
                }
                Set task's status as Ready
                If task is not in suspended state {
                        Change the task from memory pool's queue to ready queue
                        If executing task's priority is lower than the waiting,
                            set to execute scheduler when quitting this function
                }
                else {
                        Delete the task from the memory pool's queue
                }
        }
        Release the memory allocated for the memory pool
        Release resources allocated when creating memory pool(Release memory pool's queue)
        If scheduler has to be executed {
                Call scheduler
                Return the error code in the context's error code register
        }
        Exit critical section (Interrupt Restore)
        return E_OK;
}

③ uITRON implementation to delete fixed memory pool

ER del_mpf(ID mpfid)
{
        Check context. If called from ISR,
                return E_CTX;
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        If (head pointer == NULL) { /* no free blocks; some task may be waiting */
                For each task waiting on the memory pool's queue {
                        Set E_DLT at the error code register of task's context
                        If task is waiting for timeout {
                                Delete task from timer queue
                        }
                        Set task's status as Ready
                        If task is not in suspended state {
                                Change the task from memory pool's queue to ready queue
                                If executing task's priority is lower than the waiting,
                                    set to execute scheduler when quitting this function
                        }
                        else {
                                Delete the task from the memory pool's queue
                        }
                }
        }
        Release the memory allocated for the memory pool
        Release resources allocated when creating memory pool(Release memory pool's queue)
        If scheduler has to be executed {
                Call scheduler
                Return the error code in the context's error code register
        }
        Exit critical section (Interrupt Restore)
        return E_OK;
}

Please find other function details under RTOS label.

Variable Vs Fixed memory pool: Refer status

① Difference between variable size memory pool and fixed size memory pool

In the case of the variable memory pool, the free list has to be traversed to gather the status. Otherwise, both are almost the same.
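
A hedged usage sketch of both reference calls; the field names follow the µITRON 4.0 T_RMPL / T_RMPF packets and the IDs are placeholders.

#include <itron.h>

#define MPL_ID  1
#define MPF_ID  1

void report_pool_status(void)
{
        T_RMPL rmpl;
        T_RMPF rmpf;

        if (ref_mpl(MPL_ID, &rmpl) == E_OK) {
                /* rmpl.fmplsz : total free size accumulated over the list        */
                /* rmpl.fblksz : maximum size of a single free block              */
                /* rmpl.wtskid : task at the head of the wait queue, or TSK_NONE  */
        }
        if (ref_mpf(MPF_ID, &rmpf) == E_OK) {
                /* rmpf.fblkcnt: counter for the number of free fixed blocks      */
                /* rmpf.wtskid : as above                                         */
        }
}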

② ITRON implementation of ref_mpl()

ER ref_mpl(ID mplid, T_RMPL *pk_rmpl)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        Traverse the free list, accumulating the total free size and
            recording the maximum size of a single free block;
        Both are stored in the structure;
        Parse through the queue to get ID number of task at the head
        if valid task found {
                Store ID number in the structure
                Exit critical section (Interrupt Restore)
                return E_OK;
        }
        Store the task id as TSK_NONE;
        Exit critical section (Interrupt Restore)
        return E_OK;
}

③ ITRON implementation of ref_mpf()

ER ref_mpf(ID mpfid, T_RMPF *pk_rmpf)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        Store the counter for number of free blocks in the structure;
        If (list head points to NULL) { /* no free blocks; a task may be waiting */
                parse through the queue to get ID number of task at the head
                if valid task found {
                        Store ID number in the structure
                        Exit critical section (Interrupt Restore)
                        return E_OK;
                }
        }
        Store the task id as TSK_NONE;
        Exit critical section (Interrupt Restore)
        return E_OK;
}

Variable Vs Fixed memory pool: Releasing memory block

① Difference between variable size memory pool and fixed size memory pool

In the variable memory pool, the released memory block is first merged back into the pool, and only then are the waiting tasks checked. The memory block is inserted into the free list at the position corresponding to its physical address; in other words, the free blocks are kept in the order in which they are physically located, so the list is traversed until the exact location is found.
 
But for the fixed memory pool, the waiting tasks are checked before the released memory block is added back to the list. If there is a waiting task, the block is assigned to that task directly. Otherwise, the memory block is always added at the head of the list, so the fixed memory blocks are not necessarily allocated in the order in which they are physically located.
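
The "add at the head of the list" path for the fixed pool is tiny; a minimal sketch, assuming (as the pseudocode below also does) that the next-free link lives in the first word of each free block.

static void mpf_push_free(void **head, UINT *free_cnt, void *blk)
{
        *(void **)blk = *head;           /* old head is linked through the block's first word */
        *head = blk;                     /* the released block becomes the new head           */
        (*free_cnt)++;                   /* one more free block                               */
}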

② ITRON implementation of rel_mpl()

ER rel_mpl(ID mplid, VP blk)
{
        Check context. If called from ISR,
                return E_CTX;
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        /* Validity checks */
        If (address is not aligned to header size) OR
            (input pointer is out of memory pool range)
                return E_PAR;
        Move back to the internal start address of the block (input pointer - 2 words, i.e. the header)
        Insert the memory block in the list at the exact position from where it is taken
        /* One block is released. Check does it satisfy the first waiting task */
        Traverse each queue from starting of memory pool queue till queue tail is reached {
                Till (valid task exists at head of the queue) {
                        Get requested block size from task's context
                        As if called get_blk(), add header size to this requested size and
                        Try to allocate the block
                        If (block is not allocated)
                                goto mpl_ext;
                        /* block is allocated */
                        Store requested size + header size at first word of the block
                        Advance the acquired block's address by two words and
                            store it in the return pointer that is got from task's context;
                        if (task is waiting in Timeout) {
                                Remove from timer queue
                        }
                        Set task status as Ready
                        if (the task is NOT suspended) {
                                If (the task priority is higher than the current)
                                        Set to call scheduler when quitting this function
                                Change the task to the Ready Queue of corresponding priority
                        } else {
                                Just delete the task from the memory pool queue
                        }
                }
        }
        if (set to call scheduler) {
                Call scheduler
                Return error code that is set in scheduler
        }
        Otherwise, exit critical section and return error code
}

Insert operation () /* merges the released block into the address-ordered free list */
{
        Take the size (user requested size + size of header) from the first word of block
        Validity-check the size: error if it is less than the header size or if the end of the block falls outside the memory pool range
        Round up the size to multiple of Basic unit size(Header size)
        Assume the head pointer as an empty block of size zero
        /* keep two consecutive blocks as previous and next nodes and check for
           insertion of the input block */
        Start with head pointer as previous node;
        Keep the size of previous node as zero;
        Keep the node pointed by head as next node;
        Till the node is inserted {
                if (next node == NULL OR next node start address > input block's start) {
                        Validate if input block start + size do not overlap next node;
                        /* Check if prev node is adjacent to input block */
                        if (input block start == prev node + size of prev node) {
                                Add input block size to size of prev node;
                                /* input block no more exists. it is to be combined with prev */
                        } else {
                                /* At front side, this block will remain separate, So, */
                                /* Update the prev node to point this input node; */
                                prev node->next = input block;
                                /* Yet to check if next node can be combined with this */
                                Make this input node as prev node
                                Make this node size as size of prev node
                        }
                        if (next node is NULL) {
                                /* no second half. So, update inserted node's fields */
                                Update prev node-> next = NULL;
                                Update prev node-> size = size of prev node;
                        } else if (next node start addr == prev node addr + size of prev node) {
                                /* next node needs to be combined */
                                prev node->next = next node->next;
                                prev node->size = size of prev node + size of next node;
                        } else { /* At back side, this block will remain separate */
                                prev node->next = next node;
                                prev node->size = size of prev node;
                        }
                        /* Inserted. So */
                        return;
                }
                prev node = next node;
                next node = next node->next;
                size of prev node = prev node->size;
                Validate prev node does not overlap with input block
        }
        return;
}
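
The insert operation can be sketched in C roughly as follows, assuming each free block starts with a (next, size) header and that the free list is kept sorted by address. It illustrates the merging logic only, not the kernel's actual code or its validity checks.

typedef struct free_blk {
        struct free_blk *next;           /* next free block, in address order */
        SIZE             size;           /* size of this free block           */
} free_blk;

/* 'head' plays the role of the dummy node of size zero described above */
static void insert_free_block(free_blk *head, free_blk *blk, SIZE size)
{
        free_blk *prev = head;
        free_blk *next = head->next;
        SIZE      prev_size = 0;

        /* walk until the input block falls between prev and next */
        while (next != NULL && (char *)next < (char *)blk) {
                prev = next;
                prev_size = prev->size;
                next = next->next;
        }

        if (prev != head && (char *)prev + prev_size == (char *)blk) {
                prev->size = prev_size + size;   /* front side: merge with the previous block */
        } else {
                prev->next = blk;                /* front side: block stays separate          */
                blk->size  = size;
                prev = blk;
        }

        if (next != NULL && (char *)prev + prev->size == (char *)next) {
                prev->next = next->next;         /* back side: merge the next block as well   */
                prev->size += next->size;
        } else {
                prev->next = next;               /* back side: block stays separate           */
        }
}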

③ ITRON implementation of rel_mpf()

ER rel_mpf(ID mpfid, VP blf)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        /* Validate the pointer to the block argument */
        If the block is not aligned to the CPU word OR
        if it is out of range of memory pool
                return E_PAR;
        If (head pointer is NULL) {
        /* memory pool is empty. There should be some tasks waiting for memory */
        /* awake one task at the head of the queue and assign the memory block */
                Till (end of queue (tail) is reached) {
                        if (valid task found sleeping) {
                                Get the return pointer from task's context;
                                Assign the block address to the return pointer;
                                if (task is waiting in Timeout) {
                                         Remove from timer queue
                                }
                                Set task status as Ready
                                if (the task is NOT suspended) {
                                        Change the task to the Ready Queue of corresponding priority
                                        If (the task priority is higher than the current){
                                                call scheduler
                                                return error code from scheduler
                                        } else {
                                                Exit critical section and return the error code from it
                                        }
                                } else {
                                        Just delete the task from the memory pool queue
                                        Exit critical section and return the error code from it
                                }
                        }
                }
        }
        /* Add the memory block at the head of the list */
        Store the current head pointer in the first word of the block (the block's link field)
        head pointer = block;
        Increment the counter for free blocks
        Exit critical section
        return E_OK;
}
Advancing towards another milestone! Visit other posts under the RTOS label.

Monday, August 12, 2013

Variable Vs Fixed size memory pool: Allocation

① Difference between variable size memory pool and fixed size memory pool

A fixed memory pool requires much less processing than a variable memory pool.

In a variable memory pool, the memory gets split into small pieces, so you eventually end up with a fragmentation problem. Also, for each allocated block, an additional two words (the header size) are spent on management. (Note that in the implementation, the returned pointer is two words beyond the internal start address of the allocated block.)
 
So, it is better to use a fixed-size memory pool wherever possible. Also, since the variable memory pool keeps interrupts disabled while it traverses the free list, it can increase interrupt latency.
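
A hedged caller-side comparison of the two allocators; the IDs and the requested size are placeholders.

#include <itron.h>

#define MPL_ID  1
#define MPF_ID  1

void use_both_pools(void)
{
        VP vblk, fblk;

        if (get_mpl(MPL_ID, 100, &vblk) == E_OK) {   /* size is rounded up and a 2-word header is added internally */
                /* ... */
                rel_mpl(MPL_ID, vblk);
        }
        if (get_mpf(MPF_ID, &fblk) == E_OK) {        /* every block has the same pre-configured size */
                /* ... */
                rel_mpf(MPF_ID, fblk);
        }
}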

② ITRON implementation of get_mpl()

ER get_mpl(ID mplid, SIZE blksz, VP *p_blk)
{
        Check context. If called from ISR,
                return E_CTX;
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        User requested size is rounded up to multiple of Basic unit size(Header size)
        Calculate Internal_required_size( = Rounded up value + Header size)
        current_block = head_pointer->next;
        for (each free memory block from top of the list) {
                /* till the next pointer is NULL */
                /* keep pointer to previous and next nodes */
                if (current_block == NULL)
                        goto No_Block;
                compare the current block size with internal_required_size
                if (current block size >= internal_required_size) {
                        acquired_block = current block;
                        break;
                }
                current block = current block's next pointer;
        }
        if (acquired block size == internal_required_size) {
                Just remove the acquired block from the list
                /* previous node's next = acquired block's next */
        } else {
                /* acquired block is cut into two pieces. new_block is born */
                new_block start address = acquired block ptr + internal_required_size;
                the previous node's next = new_block start address
                new_block's next = acquired block's next pointer
                new_block's size = acquired block's size - internal_required size;
        }
No_Block:
        if (no block available) {
                If (polling mode)
                        Exit critical section and return E_TMOUT;
                If (called from task independent state ex: timer handlers) or
                If (called from dispatch disabled state)
                        Return E_CTX;
                If (Timeout value is specified) {
                        Add task into timer queue;
                        Set TCB flags that 'Waiting for memory block, Check for Timeout'
                } else (Wait forever) {
                        Set TCB flags that 'Waiting forever for memory block'
                }
                Set memory pool ID in TCB
                Also save other parameters requested block size, pointer to return
                        memory block pointer inside context
                Select Queue according to queueing order
                If (Tasks should be granted in FIFO order)
                        Select Memorypool->Queue[0] as Queue
                Else (Should be granted in Priority order)
                        Select Memorypool->Queue[Current Tasks Priority]
                Change the task from ReadyQueue to the Memory pool Queue
                Schedule the tasks, since this task is going to sleep
                (Once the task is released, execution will return here)
                Return the value set by scheduler(Scheduler directly set return value before return here)
        }
        /* keep the user requested size */
        allocated block's first word(next pointer) = user requested size + header size;
        Advance the acquired block's address by two words and store it in the return pointer;
        Exit critical section
        Return E_OK;
}
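
The search-and-split part of get_mpl() can be sketched as below, reusing the free_blk header assumed in the release post above; the waiting path and the critical sections are left out, and the rounding macro assumes the header size is a power of two.

#define MPL_ROUND_UP(sz)  (((sz) + sizeof(free_blk) - 1) & ~(sizeof(free_blk) - 1))

static void *mpl_first_fit(free_blk *head, SIZE req)
{
        SIZE need = MPL_ROUND_UP(req) + sizeof(free_blk);     /* rounded size + 2-word header       */
        free_blk *prev = head;
        free_blk *cur  = head->next;

        while (cur != NULL && cur->size < need) {             /* first fit                          */
                prev = cur;
                cur  = cur->next;
        }
        if (cur == NULL)
                return NULL;                                  /* no block: caller waits or polls    */

        if (cur->size == need) {
                prev->next = cur->next;                       /* exact fit: unlink the whole block  */
        } else {
                free_blk *rest = (free_blk *)((char *)cur + need);
                rest->next = cur->next;                       /* remainder becomes a new free block */
                rest->size = cur->size - need;
                prev->next = rest;
        }
        *(SIZE *)cur = req + sizeof(free_blk);                /* first word keeps requested size + header */
        return (char *)cur + sizeof(free_blk);                /* user pointer is two words past the start */
}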

③ ITRON implementation of get_mpf()

ER get_mpf(ID mpfid, VP *p_blf)
{
        Sanity check on arguments
        Enter critical section (Interrupt Disable)
        Check head pointer of the memory pool
        if (head points to NULL (no block available)) {
                If (polling mode)
                        Exit critical section and return E_TMOUT;
                If (called from task independent state ex: timer handlers) or
                If (called from dispatch disabled state)
                        Return E_CTX;
                If (Timeout value is specified) {
                        Add task into timer queue;
                        Set TCB flags that 'Waiting for fixed memory block, Check for Timeout'
                } else (Wait forever) {
                        Set TCB flags that 'Waiting forever for fixed memory block'
                }
                Set memory pool ID in TCB
                Also save the parameter 'pointer to return memory block pointer' inside context
                Select Queue according to queueing order
                If (Tasks should be granted in FIFO order)
                        Select Memorypool->Queue[0] as Queue
                Else (Should be granted in Priority order)
                        Select Memorypool->Queue[Current Tasks Priority]
                Change the task from ReadyQueue to the Memory pool Queue
                Schedule the tasks, since this task is going to sleep
                (Once the task is released, execution will return here)
                Return the value set by scheduler(Scheduler directly set return value before return here)
        }
        /* remove the block at the head from the list */
        pointer to the block = pointer of the block pointed by head
        head = pointer to the next block
        /* pointer to the next block is stored temporarily at the first word of each block */
        Reduce the counter for number of free blocks
        Exit critical section
        Return E_OK;
}
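
And the fast path of get_mpf() ("remove the block at the head") is equally small; a minimal sketch, again assuming the next-free link is kept in the first word of each free block.

static void *mpf_pop_free(void **head, UINT *free_cnt)
{
        void *blk = *head;

        if (blk == NULL)
                return NULL;             /* no block: the caller goes to the wait queue instead */
        *head = *(void **)blk;           /* the next free block was stored in the first word    */
        (*free_cnt)--;                   /* one less free block                                 */
        return blk;
}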

Getting into the reality of Real Time OS! Visit other posts under the RTOS label!!

Saturday, August 03, 2013

Intel inside me

In my career, for the first time, I got the chance to work with the Intel architecture. So, I would like to expand my domain knowledge of the x86 architecture through continuous learning. I have put down some of my basic understanding of the platform I have started working on..