Sample Target Environment Service Implementations of Data Communication Methods
Direct-Access Data Communication
To maximize performance of component code or if mutual exclusion is inherent in the component model design, use direct-access data communication. Generated function code communicates with other functions directly by using memory that platform services manage for the target execution environment. Platform services maintain memory persistence across time during one start and shutdown cycle.
This figure shows two generated callable functions that use direct-access data communication. The numeric callouts in the figure are keyed to the list that follows the figure.
1. At startup, the platform service allocates a buffer and initializes the buffer to 0. The buffer is persistent across time.
2. The service calls function F2 to receive data. The function reads the value 0 from the buffer without a safeguard.
3. The service calls function F1 to send data. The function writes the value 1 to the buffer. Direct-access communication assumes that functions F1 and F2 do not access the buffer concurrently, so a safeguard is not needed.
4. The service calls function F2. During its second execution, function F2 receives the new data. The function reads the value 1 from the buffer.
5. The service calls function F1. During its second execution, function F1 sends the value 2 to the buffer.
6. The service calls function F2. During its third execution, function F2 receives the value 2 by reading the new value from the buffer.
Functions F1 and F2 have direct access to memory for the duration of function execution. The platform service assumes that the functions do not access the buffer concurrently and applies no data concurrency safeguards.
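For illustration, this C sketch shows the shape of direct-access communication. It is not generated code; the names platform_buffer, F1, and F2 are hypothetical. Both functions read and write one platform-managed buffer directly, and no concurrency safeguard is applied.

/* Minimal sketch of direct-access data communication, assuming a C target.
   The names platform_buffer, F1, and F2 are hypothetical. */
#include <stdio.h>

/* Buffer that the platform service allocates at startup and keeps
   persistent for one start and shutdown cycle. Initialized to 0. */
static int platform_buffer = 0;

/* Generated callable function F1: sends data by writing directly
   to the platform-managed buffer. No concurrency safeguard is applied. */
static void F1(int value)
{
    platform_buffer = value;
}

/* Generated callable function F2: receives data by reading directly
   from the platform-managed buffer. */
static int F2(void)
{
    return platform_buffer;
}

int main(void)
{
    /* Call sequence from the walkthrough: F2 reads 0, F1 writes 1,
       F2 reads 1, F1 writes 2, F2 reads 2. */
    printf("F2 receives %d\n", F2()); /* 0 */
    F1(1);
    printf("F2 receives %d\n", F2()); /* 1 */
    F1(2);
    printf("F2 receives %d\n", F2()); /* 2 */
    return 0;
}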
Outside-Execution Data Communication
To favor memory optimizations over data freshness or when the platform service code is autogenerated based on full knowledge of your application, use outside-execution data communication. The platform communicates data with other functions outside (before and after) function execution. As component model designer, you accept how the platform applies safeguards for concurrent data access. For the duration of its execution, a function can access memory.
The platform communication services:
Allocate a buffer and, at startup, initialize the buffer.
Lock the buffer at the beginning of function execution.
Unlock the buffer at the end of function execution.
Switch buffers to maintain memory exclusivity for function execution.
Maintain memory persistence during function execution within the context of a power cycle.
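As a rough single-threaded sketch of the before-and-after pattern, the following C code copies data between a platform-owned location and the buffers that the functions use, outside the function calls themselves. The names service_call_F1, service_call_F2, step_F1, and step_F2 are hypothetical, and a concurrent implementation would also need the locking and buffer switching listed above.

/* Minimal sketch of outside-execution communication, assuming a C target
   and a single-threaded scheduler. All names are hypothetical. */
#include <stdio.h>

static int f1_out = 0;   /* buffer locked for F1 while F1 executes        */
static int f2_in  = 0;   /* buffer locked for F2 while F2 executes        */
static int shared = 0;   /* platform-owned copy, updated outside execution */

/* Generated functions access only their locked buffers during execution. */
static void step_F1(int value) { f1_out = value; }
static int  step_F2(void)      { return f2_in; }

/* Platform service: communicate data before and after function execution. */
static void service_call_F1(int value)
{
    step_F1(value);        /* F1 executes against its locked buffer        */
    shared = f1_out;       /* after execution: publish the sent data       */
}

static int service_call_F2(void)
{
    f2_in = shared;        /* before execution: latch the received data    */
    return step_F2();      /* F2 sees a constant value while it runs       */
}

int main(void)
{
    printf("F2 receives %d\n", service_call_F2()); /* 0 */
    service_call_F1(1);
    printf("F2 receives %d\n", service_call_F2()); /* 1 */
    return 0;
}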
This figure shows platform communication services that use an outside-execution data communication algorithm.
Generated callable functions F1 and F2 send and receive data while executing. The platform services communicate the data before and after function execution.
Data that function F1 sends becomes accessible to function F2 after F1 completes execution.
The platform service locks the buffer and holds the data that function F2 receives constant while F2 executes.
F2 can access the received data as needed until execution completes.
This figure shows how a platform service might implement outside-execution communication by using a triple-buffer algorithm. The numeric callouts in the figure are keyed to the list that follows the figure.
With this algorithm, a platform service:
1. Allocates three buffers and initializes the buffers with value 0.
2. Calls generated callable function F2.
3. Locks the first buffer for F2. F2 can read from the locked buffer as needed while executing. Because data has not been sent to the buffer, the function receives the value 0.
4. Calls generated callable function F1.
5. Locks the second buffer for F1. F1 can write to the locked buffer as needed while executing. The function writes the value 1 to the buffer.
6. When F2 completes execution, unlocks the first buffer.
7. Calls F2.
8. Locks the third buffer for F2. Because the second buffer is locked for F1, the value 1 in that buffer is not accessible to F2. F2 receives the value that is in the third buffer, which is 0.
9. When F1 completes execution, unlocks the second buffer.
10. When F2 completes execution, unlocks the third buffer.
11. Calls F1 and F2.
12. Locks the second and third buffers. The service locks the second buffer so that F2 can receive the freshest data, which is the value 1. The service locks the third buffer so that F1 can write a new value to that buffer. F1 writes the value 2 to the third buffer.
13. When F1 completes execution, unlocks the third buffer.
14. When F2 completes execution, unlocks the second buffer. Because the service retains the lock on the second buffer until F2 completes execution, F2 receives the value 1 again if it reads a second time during that execution.
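The following single-threaded C sketch mirrors this walkthrough. The bookkeeping (locked flags, sequence numbers) and helper names such as lock_freshest_for_reader are hypothetical, and the exact buffer chosen when several buffers are equally stale can differ from the figure, but the values that F2 receives (0, 0, 1) match the steps above.

/* Minimal single-threaded sketch of the outside-execution triple-buffer
   algorithm. A real service would use the target's locking primitives and
   run F1 and F2 concurrently. Assumes at least one buffer is unlocked
   whenever a selection is made. */
#include <stdio.h>

#define NUM_BUFFERS 3

static int  buf[NUM_BUFFERS];     /* three buffers, initialized to 0   */
static int  locked[NUM_BUFFERS];  /* nonzero while a function holds it */
static long seq[NUM_BUFFERS];     /* higher value = fresher data       */
static long next_seq = 1;

/* Pick the freshest unlocked buffer for the reader and lock it. */
static int lock_freshest_for_reader(void)
{
    int best = -1;
    for (int i = 0; i < NUM_BUFFERS; i++)
        if (!locked[i] && (best < 0 || seq[i] > seq[best]))
            best = i;
    locked[best] = 1;
    return best;
}

/* Pick the stalest unlocked buffer for the writer and lock it. */
static int lock_stalest_for_writer(void)
{
    int best = -1;
    for (int i = 0; i < NUM_BUFFERS; i++)
        if (!locked[i] && (best < 0 || seq[i] < seq[best]))
            best = i;
    locked[best] = 1;
    return best;
}

static void unlock_buffer(int i) { locked[i] = 0; }

int main(void)
{
    /* F2 executes: its locked buffer holds 0 because nothing was sent yet. */
    int r = lock_freshest_for_reader();
    printf("F2 receives %d\n", buf[r]);            /* 0 */

    /* F1 executes while F2 still holds its buffer: F1 writes 1 elsewhere. */
    int w = lock_stalest_for_writer();
    buf[w] = 1;
    seq[w] = next_seq++;

    unlock_buffer(r);                              /* F2 completes */

    /* F2 executes again before F1 completes: the buffer holding 1 is still
       locked for F1, so F2 receives 0 from another buffer. */
    r = lock_freshest_for_reader();
    printf("F2 receives %d\n", buf[r]);            /* 0 */

    unlock_buffer(w);                              /* F1 completes */
    unlock_buffer(r);                              /* F2 completes */

    /* F1 and F2 execute together: F2 locks the freshest buffer (value 1)
       and F1 writes 2 to the stalest unlocked buffer. */
    r = lock_freshest_for_reader();
    w = lock_stalest_for_writer();
    buf[w] = 2;
    seq[w] = next_seq++;
    printf("F2 receives %d\n", buf[r]);            /* 1 */

    unlock_buffer(w);
    unlock_buffer(r);
    return 0;
}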
During-Execution Data Communication
To favor data freshness over memory usage or when there is no aggregate service generation phase, use during-execution data communication. The platform service communicates data with other functions immediately during function execution. As component model designer, you accept how the service applies safeguards for concurrent data access. For the duration of its execution, a function can access memory. A function maintains value coherence during execution by using a local buffer.
The platform communication services:
Allocate a buffer and initialize the buffer to 0 at startup.
Lock the buffer during a receive or send operation.
Unlock the buffer immediately after reading a value from or writing a value to the buffer.
Switch buffers to maintain memory exclusivity for function execution.
Do not maintain memory persistence during a power cycle or during function execution.
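The following C sketch illustrates the value-coherence rule at the receiver. The names shared_buffer, service_send, and receive_from_service are hypothetical, and the comments mark where a real service would lock and unlock the buffer for each individual access.

/* Minimal sketch of value coherence for during-execution communication,
   assuming a C target. Hypothetical names throughout. */
#include <stdio.h>

static int shared_buffer = 0;   /* platform-owned buffer, initialized to 0 */

/* In a real service, each call would lock the buffer, perform the single
   read or write, and unlock immediately. */
static void service_send(int value)
{
    /* lock shared_buffer ... */
    shared_buffer = value;
    /* ... unlock immediately after the write */
}

static int receive_from_service(void)
{
    /* lock shared_buffer ... */
    int value = shared_buffer;
    /* ... unlock immediately after the read */
    return value;
}

/* Generated callable function F2 copies the received value into a local
   buffer once, so every use during this execution sees the same value,
   even if a send updates the shared buffer in the meantime. */
static void F2(void)
{
    int local_value = receive_from_service();

    service_send(9);   /* stands in for a send that happens while F2 runs */

    printf("first use:  %d\n", local_value);   /* 0 */
    printf("second use: %d\n", local_value);   /* still 0: value coherence */
}

int main(void)
{
    F2();
    return 0;
}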
During-execution communication can be nonblocking or blocking.
Nonblocking During-Execution Data Communication
This figure shows platform communication services that use a nonblocking during-execution data communication algorithm.
Data that generated callable function F1 sends becomes accessible to generated callable function F2 during the execution of F1. The receiver, function F2, uses local memory to hold the value constant during execution.
This figure shows how a platform service might implement nonblocking during-execution communication by using a triple-buffer approach. The numeric callouts in the figure are keyed to the list that follows the figure.
With this algorithm, a platform service:
1. Allocates three buffers and initializes the buffers with value 0.
2. Calls generated callable function F2.
3. Locks the first buffer for F2. F2 reads data from the buffer immediately, receiving the value 0.
4. When the read operation is complete, unlocks the buffer immediately. F2 cannot access the buffer again during execution. For additional read operations, F2 must preserve the value in local memory.
5. Calls generated callable function F1.
6. Locks the first buffer for F1. F1 writes the value 1 to the buffer immediately.
7. When the write operation is complete, unlocks the buffer immediately.
8. Calls F2.
9. Locks the buffer that contains the freshest data. The first buffer contains the freshest data, which is the value 1. F2 reads the value immediately.
10. When the read operation is complete, unlocks the buffer immediately.
11. Calls F1 and F2.
12. For F2, locks the buffer that contains the freshest data. The first buffer contains the freshest data, which is the value 1. For F1, locks the buffer that contains the stalest data. The second buffer contains the stalest data. F1 writes the value 2 to that buffer.
13. When the read and write operations are complete, unlocks the buffers immediately.
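The following single-threaded C sketch mirrors this walkthrough. The sequence counters and the helper names service_send and service_receive are hypothetical bookkeeping; in a real service, the brief lock around each individual read or write would use the target's locking or atomic primitives.

/* Minimal single-threaded sketch of nonblocking during-execution
   communication with a triple buffer. Hypothetical names throughout. */
#include <stdio.h>

#define NUM_BUFFERS 3

static int  buf[NUM_BUFFERS];   /* three buffers, initialized to 0 */
static long seq[NUM_BUFFERS];   /* higher value = fresher data     */
static long next_seq = 1;

/* Receive: briefly lock the freshest buffer, copy its value, unlock. */
static int service_receive(void)
{
    int freshest = 0;
    for (int i = 1; i < NUM_BUFFERS; i++)
        if (seq[i] > seq[freshest])
            freshest = i;
    /* lock buf[freshest] ... */
    int value = buf[freshest];   /* caller keeps this copy in local memory */
    /* ... unlock immediately after the read */
    return value;
}

/* Send: briefly lock the stalest buffer, write the value, unlock. */
static void service_send(int value)
{
    int stalest = 0;
    for (int i = 1; i < NUM_BUFFERS; i++)
        if (seq[i] < seq[stalest])
            stalest = i;
    /* lock buf[stalest] ... */
    buf[stalest] = value;
    seq[stalest] = next_seq++;
    /* ... unlock immediately after the write */
}

int main(void)
{
    printf("F2 receives %d\n", service_receive()); /* 0, nothing sent yet */
    service_send(1);                               /* F1 writes 1          */
    printf("F2 receives %d\n", service_receive()); /* 1, the freshest data */
    /* F1 and F2 run together: F2 still reads 1 while F1 writes 2 to the
       stalest buffer; the value 2 becomes available to later receives. */
    printf("F2 receives %d\n", service_receive()); /* 1 */
    service_send(2);
    printf("F2 receives %d\n", service_receive()); /* 2 on the next receive */
    return 0;
}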
Blocking During-Execution Data Communication
This figure shows platform communication services that use a blocking during-execution data communication algorithm. A blocking algorithm must avoid livelock and deadlock concurrency issues.
Data that the generated callable function F1 sends becomes accessible to generated callable function F2 during the execution of F1. The sender, in this case F1, might block the receiver. The receiver, function F2, uses local memory to hold the value constant during execution. The service might maintain memory persistence for a power cycle.
This figure shows how a platform service might implement blocking during-execution communication by using a single-buffer approach. The numeric callouts in the figure are keyed to the list that follows the figure.
With this algorithm, a platform service:
1. Allocates a buffer and initializes the buffer with value 0.
2. Calls generated callable function F2.
3. Locks the buffer for F2. F2 reads the value in the buffer, which is 0 because a send has not yet occurred. For multiple read operations, F2 must preserve the value in local memory.
4. When the read operation is complete, unlocks the buffer immediately.
5. Calls generated callable function F1.
6. Locks the buffer for F1. F1 writes the value 1 to the buffer immediately.
7. When the write operation is complete, unlocks the buffer immediately.
8. Calls F2.
9. Locks the buffer for F2. F2 reads the value 1 from the buffer.
10. When the read operation is complete, unlocks the buffer immediately.
11. Calls F1 and F2.
12. Blocks F2 and locks the buffer for F1. F1 writes the value 2 to the buffer.
13. When the write operation is complete, unlocks the buffer immediately.
14. Unblocks F2 and locks the buffer for F2. F2 reads the value 2 from the buffer.
15. When the read operation is complete, unlocks the buffer immediately.
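The following C sketch shows one way a blocking single-buffer service might look on a POSIX target, using a mutex so that whichever function reaches the buffer second blocks until the first releases it. The names service_send and service_receive are hypothetical.

/* Minimal sketch of blocking during-execution communication with a single
   buffer, assuming a POSIX target. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static int buffer = 0;   /* single buffer, initialized to 0 at startup */
static pthread_mutex_t buffer_mutex = PTHREAD_MUTEX_INITIALIZER;

/* F1 sends: blocks here if the receiver currently holds the buffer. */
static void service_send(int value)
{
    pthread_mutex_lock(&buffer_mutex);
    buffer = value;
    pthread_mutex_unlock(&buffer_mutex);   /* unlock immediately after the write */
}

/* F2 receives: blocks here if the sender currently holds the buffer.
   The caller keeps the returned value in local memory for reuse. */
static int service_receive(void)
{
    pthread_mutex_lock(&buffer_mutex);
    int value = buffer;
    pthread_mutex_unlock(&buffer_mutex);   /* unlock immediately after the read */
    return value;
}

int main(void)
{
    printf("F2 receives %d\n", service_receive()); /* 0, no send has occurred */
    service_send(1);                               /* F1 writes 1              */
    printf("F2 receives %d\n", service_receive()); /* 1                        */
    service_send(2);                               /* F1 writes 2; a concurrent
                                                      F2 would block until the
                                                      mutex is released        */
    printf("F2 receives %d\n", service_receive()); /* 2                        */
    return 0;
}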