Best Practices in the Asynchronous Agents Library
This document describes how to make effective use of the Asynchronous Agents Library. The Agents Library promotes an actor-based programming model and in-process message passing for coarse-grained dataflow and pipelining tasks.
For more information about the Agents Library, see Asynchronous Agents Library.
Sections
This document contains the following sections:
Use Agents to Isolate State
Use a Throttling Mechanism to Limit the Number of Messages in a Data Pipeline
Do Not Perform Fine-Grained Work in a Data Pipeline
Do Not Pass Large Message Payloads by Value
Use shared_ptr in a Data Network When Ownership Is Undefined
Use Agents to Isolate State
The Agents Library provides an alternative to shared state by letting you connect isolated components through an asynchronous message-passing mechanism. Asynchronous agents are most effective when they isolate their internal state from other components. By isolating state, multiple components do not typically act on shared data. State isolation can enable your application to scale because it reduces contention on shared memory. State isolation also reduces the chance of deadlock and race conditions because components do not have to synchronize access to shared data.
You typically isolate state in an agent by holding data members in the private or protected sections of the agent class and by using message buffers to communicate state changes. The following example shows the basic_agent class, which derives from Concurrency::agent. The basic_agent class uses two message buffers to communicate with external components. One message buffer holds incoming messages; the other message buffer holds outgoing messages.
// basic-agent.cpp
// compile with: /c /EHsc
#include <agents.h>

// An agent that uses message buffers to isolate state and communicate
// with other components.
class basic_agent : public Concurrency::agent
{
public:
    basic_agent(Concurrency::unbounded_buffer<int>& input)
        : _input(input)
    {
    }

    // Retrieves the message buffer that holds output messages.
    Concurrency::unbounded_buffer<int>& output()
    {
        return _output;
    }

protected:
    void run()
    {
        while (true)
        {
            // Read from the input message buffer.
            int value = Concurrency::receive(_input);

            // TODO: Do something with the value.
            int result = value;

            // Write the result to the output message buffer.
            Concurrency::send(_output, result);
        }
        done();
    }

private:
    // Holds incoming messages.
    Concurrency::unbounded_buffer<int>& _input;
    // Holds outgoing messages.
    Concurrency::unbounded_buffer<int> _output;
};
For complete examples that show how to define and use agents, see Walkthrough: Creating an Agent-Based Application and Walkthrough: Creating a Dataflow Agent.
[Top]
Use a Throttling Mechanism to Limit the Number of Messages in a Data Pipeline
Many message-buffer types, such as Concurrency::unbounded_buffer, can hold an unlimited number of messages. When a message producer sends messages to a data pipeline faster than the consumer can process those messages, the application can enter a low-memory or out-of-memory state. You can use a throttling mechanism, for example, a semaphore, to limit the number of messages that are concurrently active in a data pipeline.
The following basic example demonstrates how to use a semaphore to limit the number of messages in a data pipeline. The data pipeline uses the Concurrency::wait function to simulate an operation that takes at least 100 milliseconds. Because the sender produces messages faster than the consumer can process those messages, this example defines the semaphore class to enable the application to limit the number of active messages.
// message-throttling.cpp
// compile with: /EHsc
#include <Windows.h>
#include <agents.h>
#include <concrt.h>
#include <concurrent_queue.h>
#include <sstream>
#include <iostream>

using namespace Concurrency;
using namespace std;

// A semaphore type that uses cooperative blocking semantics.
class semaphore
{
public:
    explicit semaphore(LONG capacity);

    // Acquires access to the semaphore.
    void acquire();

    // Releases access to the semaphore.
    void release();

private:
    // The semaphore count.
    LONG _semaphore_count;

    // A concurrency-safe queue of contexts that must wait to
    // acquire the semaphore.
    concurrent_queue<Context*> _waiting_contexts;
};

// A synchronization primitive that is signaled when its
// count reaches zero.
class countdown_event
{
public:
    countdown_event(unsigned int count = 0L);

    // Decrements the event counter.
    void signal();

    // Increments the event counter.
    void add_count();

    // Blocks the current context until the event is set.
    void wait();

private:
    // The current count.
    volatile long _current;

    // The event that is set when the counter reaches zero.
    event _event;

    // Disable copy constructor.
    countdown_event(const countdown_event&);
    // Disable assignment.
    countdown_event const & operator=(countdown_event const&);
};

int wmain()
{
    // The number of messages to send to the consumer.
    const int MessageCount = 5;

    // The number of messages that can be active at the same time.
    const long ActiveMessages = 2;

    // Used to compute the elapsed time.
    DWORD start_time;

    // Computes the elapsed time, rounded-down to the nearest
    // 100 milliseconds.
    auto elapsed = [&start_time] {
        return (GetTickCount() - start_time)/100*100;
    };

    // Limits the number of active messages.
    semaphore s(ActiveMessages);

    // Enables the consumer message buffer to coordinate completion
    // with the main application.
    countdown_event e(MessageCount);

    // Create a data pipeline that has three stages.

    // The first stage of the pipeline prints a message.
    transformer<int, int> print_message([&elapsed](int n) -> int {
        wstringstream ss;
        ss << elapsed() << L": received " << n << endl;
        wcout << ss.str();

        // Send the input to the next pipeline stage.
        return n;
    });

    // The second stage of the pipeline simulates a
    // time-consuming operation.
    transformer<int, int> long_operation([](int n) -> int {
        wait(100);

        // Send the input to the next pipeline stage.
        return n;
    });

    // The third stage of the pipeline releases the semaphore
    // and signals to the main application that the message has
    // been processed.
    call<int> release_and_signal([&](int unused) {
        // Enable the sender to send the next message.
        s.release();

        // Signal that the message has been processed.
        e.signal();
    });

    // Connect the pipeline.
    print_message.link_target(&long_operation);
    long_operation.link_target(&release_and_signal);

    // Send several messages to the pipeline.
    start_time = GetTickCount();
    for (int i = 0; i < MessageCount; ++i)
    {
        // Acquire access to the semaphore.
        s.acquire();

        // Print the message to the console.
        wstringstream ss;
        ss << elapsed() << L": sending " << i << L"..." << endl;
        wcout << ss.str();

        // Send the message.
        send(print_message, i);
    }

    // Wait for the consumer to process all messages.
    e.wait();
}

//
// semaphore class implementation.
//

semaphore::semaphore(LONG capacity)
    : _semaphore_count(capacity)
{
}

// Acquires access to the semaphore.
void semaphore::acquire()
{
    // The capacity of the semaphore is exceeded when the semaphore count
    // falls below zero. When this happens, add the current context to the
    // back of the wait queue and block the current context.
    if (InterlockedDecrement(&_semaphore_count) < 0)
    {
        _waiting_contexts.push(Context::CurrentContext());
        Context::Block();
    }
}

// Releases access to the semaphore.
void semaphore::release()
{
    // If the semaphore count is negative, unblock the first waiting context.
    if (InterlockedIncrement(&_semaphore_count) <= 0)
    {
        // A call to acquire might have decremented the counter, but has not
        // yet finished adding the context to the queue.
        // Create a spin loop that waits for the context to become available.
        Context* waiting = NULL;
        while (!_waiting_contexts.try_pop(waiting))
        {
            Context::Yield();
        }

        // Unblock the context.
        waiting->Unblock();
    }
}

//
// countdown_event class implementation.
//

countdown_event::countdown_event(unsigned int count)
    : _current(static_cast<long>(count))
{
    // Set the event if the initial count is zero.
    if (_current == 0L)
        _event.set();
}

// Decrements the event counter.
void countdown_event::signal()
{
    if (InterlockedDecrement(&_current) == 0L)
    {
        _event.set();
    }
}

// Increments the event counter.
void countdown_event::add_count()
{
    if (InterlockedIncrement(&_current) == 1L)
    {
        _event.reset();
    }
}

// Blocks the current context until the event is set.
void countdown_event::wait()
{
    _event.wait();
}
This example produces the following sample output:
0: sending 0...
0: received 0
0: sending 1...
0: received 1
100: sending 2...
100: received 2
200: sending 3...
200: received 3
300: sending 4...
300: received 4
The semaphore object limits the pipeline to process at most two messages at the same time.
The producer in this example sends relatively few messages to the consumer. Therefore, this example does not demonstrate a potential low-memory or out-of-memory condition. However, this mechanism is useful when a data pipeline contains a relatively high number of messages.
For more information about how to create the semaphore class that is used in this example, see How to: Use the Context Class to Implement a Cooperative Semaphore.
[Top]
Do Not Perform Fine-Grained Work in a Data Pipeline
The Agents Library is most useful when the work that the data pipeline performs is fairly coarse-grained. For example, one application component might read data from a file or a network connection and occasionally send that data to another component. The protocol that the Agents Library uses to propagate messages causes the message-passing mechanism to have more overhead than the task-parallel constructs that are provided by the Parallel Patterns Library (PPL). Therefore, make sure that the work that the data pipeline performs is long enough to offset this overhead.
Although a data pipeline is most effective when its tasks are coarse-grained, each stage of the data pipeline can use more fine-grained PPL constructs, such as task groups and parallel algorithms, to perform work. For an example of a coarse-grained data network that uses fine-grained parallelism at each processing stage, see Walkthrough: Creating an Image-Processing Network.
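As a minimal, portable sketch of this idea (it uses standard C++ std::async in place of PPL constructs such as concurrency::parallel_for, and the square_batch stage is hypothetical), a coarse pipeline stage can receive one batched message and parallelize the cheap per-element work internally:

```cpp
#include <future>
#include <vector>

// A coarse-grained pipeline stage body (hypothetical example): it
// receives a whole batch of values in one message, then parallelizes
// the fine-grained per-element work internally instead of sending one
// message per element through the pipeline.
std::vector<int> square_batch(const std::vector<int>& batch)
{
    std::vector<std::future<int>> tasks;
    tasks.reserve(batch.size());
    for (int n : batch)
    {
        // A fine-grained work item: too cheap to justify a message of
        // its own, but fine as an in-stage parallel task.
        tasks.push_back(std::async([n] { return n * n; }));
    }

    // Collect the results in the original order.
    std::vector<int> results;
    results.reserve(tasks.size());
    for (auto& t : tasks)
    {
        results.push_back(t.get());
    }
    return results;
}
```

In an agents-based pipeline, square_batch would be the body of a transformer whose message type is the whole std::vector<int>, so each message carries enough work to offset the message-passing overhead.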
[Top]
Do Not Pass Large Message Payloads by Value
In some cases, the runtime creates a copy of every message that it passes from one message buffer to another message buffer. For example, the Concurrency::overwrite_buffer class offers a copy of every message that it receives to each of its targets. The runtime also creates a copy of the message data when you use message-passing functions such as Concurrency::send and Concurrency::receive to write messages to and read messages from a message buffer. Although this mechanism helps eliminate the risk of concurrently writing to shared data, it can lead to poor memory performance when the message payload is relatively large.
You can use pointers or references to improve memory performance when you pass messages that have a large payload. The following example compares passing large messages by value to passing pointers to the same message type. The example defines two agent types, producer and consumer, that act on message_data objects. The example compares the time that is required for the producer to send several message_data objects to the consumer to the time that is required for the producer agent to send several pointers to message_data objects to the consumer.
// message-payloads.cpp
// compile with: /EHsc
#include <Windows.h>
#include <agents.h>
#include <iostream>
#include <string>

using namespace Concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
    __int64 begin = GetTickCount();
    f();
    return GetTickCount() - begin;
}

// A message structure that contains large payload data.
struct message_data
{
    int id;
    string source;
    unsigned char binary_data[32768];
};

// A basic agent that produces values.
template <typename T>
class producer : public agent
{
public:
    explicit producer(ITarget<T>& target, unsigned int message_count)
        : _target(target)
        , _message_count(message_count)
    {
    }

protected:
    void run();

private:
    // The target buffer to write to.
    ITarget<T>& _target;

    // The number of messages to send.
    unsigned int _message_count;
};

// Template specialization for message_data.
template <>
void producer<message_data>::run()
{
    // Send a number of messages to the target buffer.
    while (_message_count > 0)
    {
        message_data message;
        message.id = _message_count;
        message.source = "Application";

        send(_target, message);
        --_message_count;
    }

    // Set the agent to the finished state.
    done();
}

// Template specialization for message_data*.
template <>
void producer<message_data*>::run()
{
    // Send a number of messages to the target buffer.
    while (_message_count > 0)
    {
        message_data* message = new message_data;
        message->id = _message_count;
        message->source = "Application";

        send(_target, message);
        --_message_count;
    }

    // Set the agent to the finished state.
    done();
}

// A basic agent that consumes values.
template <typename T>
class consumer : public agent
{
public:
    explicit consumer(ISource<T>& source, unsigned int message_count)
        : _source(source)
        , _message_count(message_count)
    {
    }

protected:
    void run();

private:
    // The source buffer to read from.
    ISource<T>& _source;

    // The number of messages to receive.
    unsigned int _message_count;
};

// Template specialization for message_data.
template <>
void consumer<message_data>::run()
{
    // Receive a number of messages from the source buffer.
    while (_message_count > 0)
    {
        message_data message = receive(_source);
        --_message_count;

        // TODO: Do something with the message.
        // ...
    }

    // Set the agent to the finished state.
    done();
}

// Template specialization for message_data*.
template <>
void consumer<message_data*>::run()
{
    // Receive a number of messages from the source buffer.
    while (_message_count > 0)
    {
        message_data* message = receive(_source);
        --_message_count;

        // TODO: Do something with the message.
        // ...

        // Release the memory for the message.
        delete message;
    }

    // Set the agent to the finished state.
    done();
}

int wmain()
{
    // The number of values for the producer agent to send.
    const unsigned int count = 10000;

    __int64 elapsed;

    // Run the producer and consumer agents.
    // This version uses message_data as the message payload type.
    wcout << L"Using message_data..." << endl;
    elapsed = time_call([count] {
        // A message buffer that is shared by the agents.
        unbounded_buffer<message_data> buffer;

        // Create and start the producer and consumer agents.
        producer<message_data> prod(buffer, count);
        consumer<message_data> cons(buffer, count);
        prod.start();
        cons.start();

        // Wait for the agents to finish.
        agent::wait(&prod);
        agent::wait(&cons);
    });
    wcout << L"took " << elapsed << L"ms." << endl;

    // Run the producer and consumer agents a second time.
    // This version uses message_data* as the message payload type.
    wcout << L"Using message_data*..." << endl;
    elapsed = time_call([count] {
        // A message buffer that is shared by the agents.
        unbounded_buffer<message_data*> buffer;

        // Create and start the producer and consumer agents.
        producer<message_data*> prod(buffer, count);
        consumer<message_data*> cons(buffer, count);
        prod.start();
        cons.start();

        // Wait for the agents to finish.
        agent::wait(&prod);
        agent::wait(&cons);
    });
    wcout << L"took " << elapsed << L"ms." << endl;
}
This example produces the following sample output:
Using message_data...
took 437ms.
Using message_data*...
took 47ms.
The version that uses pointers performs better because it does not require the runtime to create a full copy of every message_data object that it passes from the producer to the consumer.
[Top]
Use shared_ptr in a Data Network When Ownership Is Undefined
When you send messages by pointer through a message-passing pipeline or network, you typically allocate the memory for each message at the front of the network and free that memory at the end of the network. Although this mechanism frequently works well, there are cases in which it is difficult or not possible to use it. For example, consider the case in which the data network contains multiple end nodes. In this case, there is no clear location to free the memory for the messages.
To solve this problem, you can use a mechanism, for example, std::shared_ptr, that enables a pointer to be owned by multiple components. When the final shared_ptr object that owns a resource is destroyed, the resource is also freed.
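The shared-ownership semantics can be seen in isolation with standard C++ (a minimal sketch that is independent of the Agents Library; shared_release_count is an illustrative helper, not a library function):

```cpp
#include <memory>

// Demonstrates that a resource owned through shared_ptr is freed exactly
// once, when the last owner releases it.
int shared_release_count()
{
    int destroyed = 0;

    // The custom deleter records when the resource is actually freed.
    std::shared_ptr<int> node1(new int(42), [&destroyed](int* p) {
        ++destroyed;
        delete p;
    });

    // A second owner, like a second end node in a data network.
    std::shared_ptr<int> node2 = node1;

    node1.reset();  // The first owner releases; destroyed is still 0.
    node2.reset();  // The last owner releases; the deleter runs once.

    return destroyed;
}
```

Because neither end node needs to know whether it is the last owner, there is no single place in the network that is responsible for freeing the message.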
The following example demonstrates how to use shared_ptr to share pointer values among multiple message buffers. The example connects a Concurrency::overwrite_buffer object to three Concurrency::call objects. The overwrite_buffer class offers messages to each of its targets. Because there are multiple owners of the data at the end of the data network, this example uses shared_ptr to enable each call object to share ownership of the messages.
// message-sharing.cpp
// compile with: /EHsc
#include <agents.h>
#include <iostream>
#include <sstream>

using namespace Concurrency;
using namespace std;

// A type that holds a resource.
class resource
{
public:
    resource(int id) : _id(id)
    {
        wcout << L"Creating resource " << _id << L"..." << endl;
    }
    ~resource()
    {
        wcout << L"Destroying resource " << _id << L"..." << endl;
    }

    // Retrieves the identifier for the resource.
    int id() const { return _id; }

    // TODO: Add additional members here.
private:
    // An identifier for the resource.
    int _id;

    // TODO: Add additional members here.
};

int wmain()
{
    // A message buffer that sends messages to each of its targets.
    overwrite_buffer<shared_ptr<resource>> input;

    // Create three call objects that each receive resource objects
    // from the input message buffer.
    call<shared_ptr<resource>> receiver1(
        [](shared_ptr<resource> res) {
            wstringstream ss;
            ss << L"receiver1: received resource " << res->id() << endl;
            wcout << ss.str();
        },
        [](shared_ptr<resource> res) {
            return res != nullptr;
        }
    );

    call<shared_ptr<resource>> receiver2(
        [](shared_ptr<resource> res) {
            wstringstream ss;
            ss << L"receiver2: received resource " << res->id() << endl;
            wcout << ss.str();
        },
        [](shared_ptr<resource> res) {
            return res != nullptr;
        }
    );

    event e;
    call<shared_ptr<resource>> receiver3(
        [&e](shared_ptr<resource> res) {
            e.set();
        },
        [](shared_ptr<resource> res) {
            return res == nullptr;
        }
    );

    // Connect the call objects to the input message buffer.
    input.link_target(&receiver1);
    input.link_target(&receiver2);
    input.link_target(&receiver3);

    // Send a few messages through the network.
    send(input, make_shared<resource>(42));
    send(input, make_shared<resource>(64));
    send(input, shared_ptr<resource>(nullptr));

    // Wait for the receiver that accepts the nullptr value to
    // receive its message.
    e.wait();
}
This example produces the following sample output:
Creating resource 42...
receiver1: received resource 42
Creating resource 64...
receiver2: received resource 42
receiver1: received resource 64
Destroying resource 42...
receiver2: received resource 64
Destroying resource 64...