AOS Experiment-2 and 3

The document outlines two experiments focused on Inter-Process Communication (IPC) in advanced operating systems, specifically using message queues and shared memory. It includes C programs demonstrating IPC mechanisms, their kernel implications, and micro-architectural considerations such as CPU cache behavior and context switching. Additionally, it highlights the importance of synchronization and proper cleanup to prevent memory leaks and ensure secure communication.


Advanced Operating Systems Lab (BCSP-801)

Topics for Programs:


1. Getting Started with Kernel Tracing - I/O
2. Kernel Implications of IPC
3. Micro-Architectural Implications of IPC
4. The TCP State Machine
5. TCP Latency and Bandwidth

Experiment-02: Write a C Program for Kernel Implications of IPC

Inter-Process Communication (IPC) allows processes to communicate and exchange data with one another through the kernel. Linux provides several IPC mechanisms, including pipes, message queues, shared memory, and semaphores. Below is a C program demonstrating IPC using message queues, followed by an explanation of the kernel-level implications.

C Program: Kernel Implications of IPC using Message Queues

This program consists of two parts:

1. Sender Process: Sends a message to the message queue.
2. Receiver Process: Reads the message from the queue.

Code

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <string.h>
#include <unistd.h>

#define MESSAGE_QUEUE_KEY 1234

// Structure for message queue
struct msg_buffer {
    long msg_type;
    char msg_text[100];
};

int main() {
    struct msg_buffer message;
    int msgid;

    // Creating a message queue
    msgid = msgget(MESSAGE_QUEUE_KEY, 0666 | IPC_CREAT);
    if (msgid == -1) {
        perror("msgget failed");
        exit(EXIT_FAILURE);
    }

    printf("Choose mode:\n1. Send Message\n2. Receive Message\n");

    int choice;
    scanf("%d", &choice);
    getchar(); // Consume newline character

    if (choice == 1) {
        // Sender process
        message.msg_type = 1;
        printf("Enter message to send: ");
        fgets(message.msg_text, sizeof(message.msg_text), stdin);
        message.msg_text[strcspn(message.msg_text, "\n")] = '\0'; // Remove newline

        if (msgsnd(msgid, &message, sizeof(message.msg_text), 0) == -1) {
            perror("msgsnd failed");
            exit(EXIT_FAILURE);
        }
        printf("Message sent successfully!\n");

    } else if (choice == 2) {
        // Receiver process
        if (msgrcv(msgid, &message, sizeof(message.msg_text), 1, 0) == -1) {
            perror("msgrcv failed");
            exit(EXIT_FAILURE);
        }
        printf("Received Message: %s\n", message.msg_text);

        // Delete the message queue after receiving the message
        if (msgctl(msgid, IPC_RMID, NULL) == -1) {
            perror("msgctl failed");
            exit(EXIT_FAILURE);
        }
        printf("Message queue deleted successfully.\n");
    } else {
        printf("Invalid choice!\n");
    }

    return 0;
}

Explanation

1. Message Queue Creation

msgid = msgget(MESSAGE_QUEUE_KEY, 0666 | IPC_CREAT);

 msgget() creates a message queue with the given key (1234).
 0666 → Permissions (read/write access for owner, group, and others).
 IPC_CREAT ensures the queue is created if it does not already exist.
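
A hard-coded key such as 1234 works for a demo, but unrelated programs can collide on the same key. A common alternative is to derive the key with ftok(); the sketch below assumes a file /tmp/ipc_demo_key exists (the path and project id are arbitrary examples):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main() {
    // Derive a System V IPC key from an existing path and a project id.
    // The path below is only an example and must exist on the system.
    key_t key = ftok("/tmp/ipc_demo_key", 'A');
    if (key == -1) {
        perror("ftok failed");
        exit(EXIT_FAILURE);
    }

    int msgid = msgget(key, 0666 | IPC_CREAT);
    if (msgid == -1) {
        perror("msgget failed");
        exit(EXIT_FAILURE);
    }
    printf("Message queue id: %d\n", msgid);
    return 0;
}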

2. Structure Definition for IPC

struct msg_buffer {
    long msg_type;
    char msg_text[100];
};

 msg_type: Used to identify message categories (important for filtering messages).
 msg_text: The actual message payload.

3. Sending a Message

if (msgsnd(msgid, &message, sizeof(message.msg_text), 0) == -1) {
    perror("msgsnd failed");
    exit(EXIT_FAILURE);
}

 msgsnd() sends a message to the queue.
 The message type must be a positive integer (used by the receiver to filter messages).
 0 → No special flags used; a non-blocking variant is sketched below.
4. Receiving a Message

if (msgrcv(msgid, &message, sizeof(message.msg_text), 1, 0) == -1) {
    perror("msgrcv failed");
    exit(EXIT_FAILURE);
}

 msgrcv() retrieves a message from the queue.
 The fourth argument (msgtyp = 1) ensures only messages with msg_type = 1 are received; other values change the selection rule, as sketched below.
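
The msgtyp argument also supports other selection rules, which is how msg_type doubles as a priority mechanism. A short sketch of the variants (same message structure as above):

// msgtyp selects which message to dequeue:
//   0   -> first message in the queue, regardless of type
//   > 0 -> first message whose msg_type equals msgtyp
//   < 0 -> message with the lowest msg_type <= |msgtyp| (priority-like order)
if (msgrcv(msgid, &message, sizeof(message.msg_text), -2, 0) == -1) {
    perror("msgrcv failed");
    exit(EXIT_FAILURE);
}
// With -2, a pending type-1 message is delivered before a type-2 message.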

5. Deleting the Message Queue

if (msgctl(msgid, IPC_RMID, NULL) == -1) {
    perror("msgctl failed");
    exit(EXIT_FAILURE);
}

 msgctl() with IPC_RMID removes the message queue after receiving the message.
 This prevents orphaned message queues in the system.
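
Orphaned queues can be listed from the shell with ipcs -q and removed with ipcrm -q <id>. From C, the same msgctl() call can also inspect a queue with IPC_STAT before removing it; a minimal sketch using fields from struct msqid_ds:

struct msqid_ds queue_info;

// Query queue status: number of pending messages and bytes currently queued.
if (msgctl(msgid, IPC_STAT, &queue_info) == -1) {
    perror("msgctl IPC_STAT failed");
    exit(EXIT_FAILURE);
}
printf("Messages in queue: %lu\n", (unsigned long)queue_info.msg_qnum);
printf("Bytes in queue   : %lu\n", (unsigned long)queue_info.msg_cbytes);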

Kernel Implications of IPC

1. How Kernel Handles Message Queues

 The kernel maintains message queues inside a system-wide message queue table.
 Each queue has an ID, permissions, and a list of pending messages.
 The msgget(), msgsnd(), msgrcv(), and msgctl() calls interact with the kernel's IPC subsystem.

2. System Calls & Kernel Space Execution

 The following system calls interact with the kernel:
   o msgget() → Creates/opens a message queue.
   o msgsnd() → Copies data from user space to kernel space.
   o msgrcv() → Copies data from kernel space to user space.
   o msgctl() → Queries, modifies, or deletes the message queue.

🔹 The kernel serializes these operations internally, ensuring synchronization and avoiding race conditions between processes.

3. Message Queue Memory Management

 The kernel allocates memory for queued messages inside a dedicated IPC region.
 Each message has a header (msg_type, metadata) and a payload (msg_text).
 Messages persist in the kernel until they are read or the queue is deleted explicitly (msgctl()).

4. Synchronization & Concurrency Issues

 If multiple processes try to read the same message, only one will receive it.
 msg_type can be used as a priority mechanism to control message order (via negative msgtyp values in msgrcv()).
 The kernel uses internal locks to serialize access to a queue and prevent race conditions.

5. Security Implications

 Permissions (0666) allow any user on the system to read/write the queue.
 Avoid unauthorized access by setting stricter permissions, as sketched below.
 The kernel checks the queue's owner and permission bits on every operation and rejects unauthorized access (EACCES), so tightening the mode is the primary safeguard.
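
A stricter setup is sketched below: the queue is created with owner-only permissions, and IPC_EXCL makes creation fail if a queue with the same key already exists, so the program does not silently attach to someone else's queue:

// 0600: only the owning user may read or write the queue.
// IPC_EXCL: fail instead of reusing an existing queue with this key.
msgid = msgget(MESSAGE_QUEUE_KEY, 0600 | IPC_CREAT | IPC_EXCL);
if (msgid == -1) {
    perror("msgget failed (queue may already exist)");
    exit(EXIT_FAILURE);
}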

How to Compile & Run

Step 1: Compile

gcc ipc_message_queue.c -o ipc

Step 2: Run Sender Process

./ipc
# Choose option 1 and enter a message

Step 3: Run Receiver Process

./ipc
# Choose option 2 to receive the message

Expected Output

Sender

Choose mode:
1. Send Message
2. Receive Message
1
Enter message to send: Hello from Process A
Message sent successfully!

Receiver

Choose mode:
1. Send Message
2. Receive Message
2
Received Message: Hello from Process A
Message queue deleted successfully.

Alternative IPC Mechanisms & Kernel Implications

1. Pipes

 The kernel buffers data temporarily between two processes.
 Data stays in the kernel's pipe buffer until it is read; unread data is discarded once all descriptors are closed (see the sketch below).
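
For comparison, a minimal pipe sketch between a parent and child process (the message text is just an example):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe failed");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == 0) {
        // Child: reader end
        close(fds[1]);
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1); // Blocks until data or EOF
        if (n > 0) {
            buf[n] = '\0';
            printf("Child read: %s\n", buf);
        }
        close(fds[0]);
    } else {
        // Parent: writer end
        close(fds[0]);
        const char *msg = "Hello through a pipe";
        write(fds[1], msg, strlen(msg)); // Kernel buffers the bytes until they are read
        close(fds[1]);
        wait(NULL);
    }
    return 0;
}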

2. Shared Memory

 Fastest IPC mechanism.
 Requires synchronization mechanisms (e.g., semaphores) to prevent race conditions.

3. Sockets

 Used for network-based IPC (across different machines); UNIX domain sockets provide the same interface locally.
 The kernel manages socket buffers, ports, and connection states.
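
A minimal local sketch using socketpair() (UNIX domain sockets, so no network is involved; the kernel still manages the socket buffers):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main() {
    int sv[2];
    // A connected pair of UNIX domain sockets; both ends are immediately usable.
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair failed");
        exit(EXIT_FAILURE);
    }

    if (fork() == 0) {
        // Child: receiver
        close(sv[0]);
        char buf[64];
        ssize_t n = recv(sv[1], buf, sizeof(buf) - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            printf("Child received: %s\n", buf);
        }
        close(sv[1]);
    } else {
        // Parent: sender
        close(sv[1]);
        const char *msg = "Hello over a socket";
        send(sv[0], msg, strlen(msg), 0); // Copied into a kernel socket buffer
        close(sv[0]);
        wait(NULL);
    }
    return 0;
}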

Conclusion

 This program demonstrates IPC using message queues.
 The kernel manages IPC resources, ensuring secure and synchronized communication.
 Proper cleanup (msgctl()) is essential to prevent orphaned queues that waste kernel memory.

Experiment-03: Write a C Program for Micro-Architectural Implications of IPC

The micro-architectural implications of Inter-Process Communication (IPC) deal with how IPC mechanisms interact with the CPU cache hierarchy, context switching, memory consistency, and synchronization.

For this, we implement Shared Memory IPC using POSIX shared memory (shm_open) and semaphores (sem_t). This method is efficient because it avoids excessive kernel involvement and reduces context-switching overhead, making it ideal for analyzing micro-architectural behavior.

C Program: Micro-Architectural Implications of IPC using Shared Memory

This program consists of:

1. Writer Process: Writes data into shared memory.
2. Reader Process: Reads the data from shared memory.
3. Synchronization using Semaphores: Ensures proper data access.

Code: Shared Memory IPC with Semaphores

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>      // For O_* constants
#include <sys/mman.h>   // For shared memory
#include <sys/stat.h>   // For mode constants
#include <semaphore.h>  // For semaphores
#include <unistd.h>     // For fork(), sleep()
#include <string.h>     // For strcpy()

#define SHARED_MEMORY_NAME "/ipc_shared_mem"
#define SEMAPHORE_NAME "/ipc_semaphore"
#define BUFFER_SIZE 128

struct shared_data {
    char message[BUFFER_SIZE];
};

int main() {
    int shm_fd;
    struct shared_data *shared_memory;
    sem_t *semaphore;

    // Create or open shared memory
    shm_fd = shm_open(SHARED_MEMORY_NAME, O_CREAT | O_RDWR, 0666);
    if (shm_fd == -1) {
        perror("Failed to create shared memory");
        exit(EXIT_FAILURE);
    }

    // Set size of shared memory
    if (ftruncate(shm_fd, sizeof(struct shared_data)) == -1) {
        perror("Failed to set shared memory size");
        exit(EXIT_FAILURE);
    }

    // Map shared memory into address space
    shared_memory = (struct shared_data *)mmap(NULL, sizeof(struct shared_data),
                                               PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    if (shared_memory == MAP_FAILED) {
        perror("Failed to map shared memory");
        exit(EXIT_FAILURE);
    }

    // Create or open semaphore (initial value 0, so the reader blocks until the writer posts)
    semaphore = sem_open(SEMAPHORE_NAME, O_CREAT, 0666, 0);
    if (semaphore == SEM_FAILED) {
        perror("Failed to create semaphore");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("Fork failed");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {
        // Child process (Reader)
        printf("Reader: Waiting for data...\n");
        sem_wait(semaphore); // Wait until writer process posts semaphore

        printf("Reader: Received message -> %s\n", shared_memory->message);

        // Cleanup shared memory and semaphore (unlinking avoids stale state on the next run)
        munmap(shared_memory, sizeof(struct shared_data));
        shm_unlink(SHARED_MEMORY_NAME);
        sem_close(semaphore);
        sem_unlink(SEMAPHORE_NAME);

    } else {
        // Parent process (Writer)
        sleep(1); // Give reader time to start

        strcpy(shared_memory->message, "Hello from Writer!");
        printf("Writer: Message written to shared memory\n");

        sem_post(semaphore); // Signal reader that data is ready

        // Cleanup in writer process
        munmap(shared_memory, sizeof(struct shared_data));
        sem_close(semaphore);
    }

    return 0;
}

How This Code Works

1. Shared Memory (shm_open)
   o Allows direct memory access between processes.
   o Mapped into both processes' virtual address spaces, reducing kernel overhead.
2. Semaphores (sem_open)
   o Used to synchronize access.
   o sem_wait() makes the reader wait until the writer signals completion.
3. Processes (fork())
   o Parent process acts as the writer.
   o Child process acts as the reader.
4. Data Transfer
   o Writer writes to shared memory.
   o Reader reads the data after synchronization.

Micro-Architectural Implications

1. CPU Cache Behavior

 Shared memory bypasses excessive kernel interaction, but:
   o Cache coherency protocols (e.g., MESI) keep each core's view of the shared data consistent.
   o A cache line is invalidated in the other core's cache whenever one process modifies it, leading to performance loss.
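
A sketch of how false sharing is usually avoided in a shared-memory layout (not part of the program above; the 64-byte line size is an assumption typical of current x86 CPUs, and the struct name is illustrative):

#define CACHE_LINE_SIZE 64  // Assumed; can be queried with sysconf(_SC_LEVEL1_DCACHE_LINESIZE)

struct shared_data_padded {
    // Each frequently written field gets its own cache line, so a store by one
    // process does not invalidate the line the other process is reading from.
    char writer_area[BUFFER_SIZE] __attribute__((aligned(CACHE_LINE_SIZE)));
    char reader_area[BUFFER_SIZE] __attribute__((aligned(CACHE_LINE_SIZE)));
};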

2. Context Switching Overhead

 Traditional IPC (e.g., pipes, message queues) requires system calls (read(), write()),
leading to expensive context switches.
 Shared memory IPC avoids system calls, reducing CPU mode switching
overhead.

3. Memory Consistency & Cache Synchronization

 Atomic operations take exclusive ownership of a cache line, generating coherence traffic on the interconnect; unrelated data sharing one line causes false sharing.
 Memory barriers (and atomic instructions) are needed so that writes become visible to other cores in the intended order, as sketched below.
 Each sem_wait()/sem_post() bounces the semaphore's cache line between cores, showing up as extra invalidations and cache misses.
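
To make the ordering requirement concrete, here is a hedged sketch of the same writer/reader handshake using a C11 atomic flag in the shared region instead of a POSIX semaphore (struct and function names are illustrative only). The release/acquire pair is what forces the message bytes to become visible before the flag does:

#include <stdatomic.h>
#include <string.h>

struct shared_data_atomic {
    char message[BUFFER_SIZE];
    atomic_int ready; // 0 = not written yet, 1 = message is valid
};

// Writer: publish the message, then set the flag with release ordering so the
// stores to message[] cannot be reordered after the flag update.
void publish(struct shared_data_atomic *s, const char *text) {
    strncpy(s->message, text, BUFFER_SIZE - 1);
    atomic_store_explicit(&s->ready, 1, memory_order_release);
}

// Reader: spin until the flag is seen with acquire ordering, which guarantees
// that the message bytes written before the release are visible here.
void consume(struct shared_data_atomic *s, char *out) {
    while (atomic_load_explicit(&s->ready, memory_order_acquire) == 0)
        ; // Busy-wait; fine for a demo, wasteful on a real system
    strncpy(out, s->message, BUFFER_SIZE - 1);
}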

4. Performance Metrics

IPC Mechanism    Latency   Context Switches   Kernel Involvement
Pipes            High      Frequent           Heavy
Message Queue    High      Frequent           Moderate
Shared Memory    Low       Minimal            Low
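
The latency column can be checked empirically. A minimal sketch, inserted on the writer side of the program above, times the write and the semaphore post with clock_gettime() (requires <time.h>; absolute numbers will vary by machine):

struct timespec t0, t1;
clock_gettime(CLOCK_MONOTONIC, &t0);

strcpy(shared_memory->message, "timing probe"); // Plain store into the mapped page
sem_post(semaphore);                            // Wake the reader

clock_gettime(CLOCK_MONOTONIC, &t1);
long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
printf("Write + notify took %ld ns\n", ns);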

How to Compile & Run

Step 1: Compile

gcc ipc_shared_memory.c -o ipc_shared -lrt -lpthread

Step 2: Run the Program

./ipc_shared

Expected Output

Reader: Waiting for data...
Writer: Message written to shared memory
Reader: Received message -> Hello from Writer!

Key Takeaways

 Shared memory IPC is micro-architecturally efficient because:
   o It minimizes kernel overhead.
   o It leverages cache locality, but can suffer from false sharing.
   o It reduces context switching, improving CPU performance.
