Anbox Implementation Analysis 3: Communication Between the Session Manager and the Container Manager
Anbox implements several different pieces of application logic in a single executable. When the Anbox executable is launched, the command-line arguments it is given determine which command it actually runs. The overall communication architecture between these different command instances looks like the figure below:

The communication between these command instances roughly proceeds as follows:
- The container manager instance starts first and listens on a Unix domain socket at a well-known location;
- The session manager then starts and listens on several other Unix domain sockets;
- The session manager also connects to the service the container manager exposes on its Unix domain socket;
- Once the connection between the session manager and the container manager has been established over the Unix domain socket, the session manager sends the container manager a command asking it to start the Android container;
- On receiving that command, the container manager first replies to the session manager, then starts an Android container via LXC and maps the file paths of the Unix domain sockets the session manager listens on into the container's /dev/ directory;
- After the Android container has started, Android processes inside the container connect to the session manager through the mapped-in Unix domain sockets;
- When the Android container starts, the session manager establishes a connection with the ADB daemon;
- Anbox's install and launch commands mainly exercise control over the Android container: they install an application APK into the container and start a specific Activity inside it, respectively, and they communicate with the session manager over D-Bus.
The channel between the session manager and the container manager is one of the more important communication paths in Anbox. The two communicate over a Unix domain socket: the container manager listens on a socket at a well-known location, the session manager initiates the connection to it, and once the connection is established the two talk over it.
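Before diving into the Anbox code, it may help to see this bare transport pattern in isolation. The following is a minimal, self-contained sketch of the listen/connect handshake over a Unix domain socket with boost::asio; the socket path is a placeholder and error handling is omitted. Anbox wraps this pattern in PublishedSocketConnector on the listening side and LocalSocketMessenger on the connecting side.

```cpp
// Minimal sketch: one side publishes a Unix domain socket, the other connects.
#include <boost/asio.hpp>
#include <unistd.h>
#include <iostream>

namespace ba = boost::asio;
using stream_protocol = ba::local::stream_protocol;

int main() {
  ba::io_service io;
  const char *path = "/tmp/anbox-demo.socket";  // placeholder path
  ::unlink(path);  // remove a stale socket file, if any

  // "Container manager" side: publish the socket and listen.
  stream_protocol::acceptor acceptor(io, stream_protocol::endpoint(path));

  // "Session manager" side: connect to the published socket.
  stream_protocol::socket client(io);
  client.connect(stream_protocol::endpoint(path));

  // Accept the queued connection and exchange a few bytes.
  stream_protocol::socket server(io);
  acceptor.accept(server);

  ba::write(client, ba::buffer("ping", 4));
  char buf[4];
  ba::read(server, ba::buffer(buf));
  std::cout.write(buf, sizeof(buf)) << std::endl;
  return 0;
}
```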
The container manager accepts RPC calls
At the code level, on the container-manager side, listening on the Unix domain socket is started by anbox::container::Service. The definition of anbox::container::Service (in anbox/src/anbox/container/service.h) is as follows:
```cpp
namespace anbox {
namespace container {
class Service : public std::enable_shared_from_this<Service> {
 public:
  static std::shared_ptr<Service> create(const std::shared_ptr<Runtime> &rt,
                                         bool privileged);
  ~Service();

 private:
  Service(const std::shared_ptr<Runtime> &rt, bool privileged);

  int next_id();
  void new_client(std::shared_ptr<
                  boost::asio::local::stream_protocol::socket> const &socket);

  std::shared_ptr<common::Dispatcher> dispatcher_;
  std::shared_ptr<network::PublishedSocketConnector> connector_;
  std::atomic<int> next_connection_id_;
  std::shared_ptr<network::Connections<network::SocketConnection>> connections_;
  std::shared_ptr<Container> backend_;
  bool privileged_;
};
}  // namespace container
}  // namespace anbox
```

dispatcher_ appears to serve no real purpose. The implementation of anbox::container::Service (in anbox/src/anbox/container/service.cpp) is as follows:
```cpp
namespace anbox {
namespace container {
std::shared_ptr<Service> Service::create(const std::shared_ptr<Runtime> &rt,
                                         bool privileged) {
  auto sp = std::shared_ptr<Service>(new Service(rt, privileged));

  auto wp = std::weak_ptr<Service>(sp);
  auto delegate_connector = std::make_shared<
      network::DelegateConnectionCreator<boost::asio::local::stream_protocol>>(
      [wp](std::shared_ptr<boost::asio::local::stream_protocol::socket> const
               &socket) {
        if (auto service = wp.lock())
          service->new_client(socket);
      });

  const auto container_socket_path =
      SystemConfiguration::instance().container_socket_path();
  sp->connector_ = std::make_shared<network::PublishedSocketConnector>(
      container_socket_path, rt, delegate_connector);

  // Make sure others can connect to our socket
  ::chmod(container_socket_path.c_str(),
          S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);

  DEBUG("Everything setup. Waiting for incoming connections.");

  return sp;
}

Service::Service(const std::shared_ptr<Runtime> &rt, bool privileged)
    : dispatcher_(anbox::common::create_dispatcher_for_runtime(rt)),
      next_connection_id_(0),
      connections_(
          std::make_shared<network::Connections<network::SocketConnection>>()),
      privileged_(privileged) {}

Service::~Service() { connections_->clear(); }

int Service::next_id() { return next_connection_id_++; }

void Service::new_client(
    std::shared_ptr<boost::asio::local::stream_protocol::socket> const
        &socket) {
  if (connections_->size() >= 1) {
    socket->close();
    return;
  }

  auto const messenger = std::make_shared<network::LocalSocketMessenger>(socket);

  DEBUG("Got connection from pid %d", messenger->creds().pid());

  auto pending_calls = std::make_shared<rpc::PendingCallCache>();
  auto rpc_channel = std::make_shared<rpc::Channel>(pending_calls, messenger);
  auto server = std::make_shared<container::ManagementApiSkeleton>(
      pending_calls,
      std::make_shared<LxcContainer>(privileged_, messenger->creds()));
  auto processor = std::make_shared<container::ManagementApiMessageProcessor>(
      messenger, pending_calls, server);

  auto const &connection = std::make_shared<network::SocketConnection>(
      messenger, messenger, next_id(), connections_, processor);
  connection->set_name("container-service");

  connections_->add(connection);
  connection->read_next_message();
}
}  // namespace container
}  // namespace anbox
```

In anbox::container::Service::create(), listening on the Unix domain socket is set up through components such as anbox::network::PublishedSocketConnector and anbox::network::DelegateConnectionCreator. For Anbox's basic approach to handling Unix domain socket listening, refer to the relevant part of Anbox Implementation Analysis 2: The I/O Model.
anbox::container::Service manages newly accepted connections through anbox::network::Connections and anbox::network::SocketConnection, and it limits itself to a single connection with a single session-manager instance. It glues the component that processes incoming messages, anbox::container::ManagementApiMessageProcessor, onto the underlying connection.
Anbox's container manager and session manager communicate through an RPC mechanism built on Protobuf. The design of the components in anbox::container::Service that process incoming messages and accept RPC calls is framed as follows:
In Anbox's design, anbox::rpc::Channel and anbox::rpc::PendingCallCache are primarily meant for sending and receiving messages on the side that initiates RPC calls, yet anbox::container::Service::new_client() creates objects of both classes for every new connection as well, which seems somewhat superfluous.
After anbox::container::Service receives a message through anbox::network::SocketConnection, the message is first handed to anbox::rpc::MessageProcessor's process_data() for processing.
The definition of anbox::rpc::MessageProcessor (in anbox/src/anbox/rpc/message_processor.h) is as follows:
```cpp
class MessageProcessor : public network::MessageProcessor {
 public:
  MessageProcessor(const std::shared_ptr<network::MessageSender>& sender,
                   const std::shared_ptr<PendingCallCache>& pending_calls);
  ~MessageProcessor();

  bool process_data(const std::vector<std::uint8_t>& data) override;

  void send_response(::google::protobuf::uint32 id,
                     google::protobuf::MessageLite* response);

  virtual void dispatch(Invocation const&) {}
  virtual void process_event_sequence(const std::string&) {}

 private:
  std::shared_ptr<network::MessageSender> sender_;
  std::vector<std::uint8_t> buffer_;
  std::shared_ptr<PendingCallCache> pending_calls_;
};
```

The implementation of anbox::rpc::MessageProcessor (in anbox/src/anbox/rpc/message_processor.cpp) is as follows:
```cpp
MessageProcessor::MessageProcessor(
    const std::shared_ptr<network::MessageSender> &sender,
    const std::shared_ptr<PendingCallCache> &pending_calls)
    : sender_(sender), pending_calls_(pending_calls) {}

MessageProcessor::~MessageProcessor() {}

bool MessageProcessor::process_data(const std::vector<std::uint8_t> &data) {
  for (const auto &byte : data) buffer_.push_back(byte);

  while (buffer_.size() > 0) {
    const auto high = buffer_[0];
    const auto low = buffer_[1];
    size_t const message_size = (high << 8) + low;
    const auto message_type = buffer_[2];

    // If we don't have yet all bytes for a new message return and wait
    // until we have all.
    if (buffer_.size() - header_size < message_size) break;

    if (message_type == MessageType::invocation) {
      anbox::protobuf::rpc::Invocation raw_invocation;
      raw_invocation.ParseFromArray(buffer_.data() + header_size, message_size);

      dispatch(Invocation(raw_invocation));
    } else if (message_type == MessageType::response) {
      auto result = make_protobuf_object<protobuf::rpc::Result>();
      result->ParseFromArray(buffer_.data() + header_size, message_size);

      if (result->has_id()) {
        pending_calls_->populate_message_for_result(
            *result, [&](google::protobuf::MessageLite *result_message) {
              result_message->ParseFromString(result->response());
            });
        pending_calls_->complete_response(*result);
      }

      for (int n = 0; n < result->events_size(); n++)
        process_event_sequence(result->events(n));
    }

    buffer_.erase(buffer_.begin(),
                  buffer_.begin() + header_size + message_size);
  }

  return true;
}

void MessageProcessor::send_response(::google::protobuf::uint32 id,
                                     google::protobuf::MessageLite *response) {
  VariableLengthArray<serialization_buffer_size> send_response_buffer(
      static_cast<size_t>(response->ByteSize()));

  response->SerializeWithCachedSizesToArray(send_response_buffer.data());

  anbox::protobuf::rpc::Result send_response_result;
  send_response_result.set_id(id);
  send_response_result.set_response(send_response_buffer.data(),
                                    send_response_buffer.size());

  send_response_buffer.resize(send_response_result.ByteSize());
  send_response_result.SerializeWithCachedSizesToArray(
      send_response_buffer.data());

  const size_t size = send_response_buffer.size();
  const unsigned char header_bytes[header_size] = {
      static_cast<unsigned char>((size >> 8) & 0xff),
      static_cast<unsigned char>((size >> 0) & 0xff),
      MessageType::response,
  };

  std::vector<std::uint8_t> send_buffer(sizeof(header_bytes) + size);
  std::copy(header_bytes, header_bytes + sizeof(header_bytes),
            send_buffer.begin());
  std::copy(send_response_buffer.data(),
            send_response_buffer.data() + send_response_buffer.size(),
            send_buffer.begin() + sizeof(header_bytes));

  sender_->send(reinterpret_cast<const char *>(send_buffer.data()),
                send_buffer.size());
}
```

In the RPC communication between the session manager and the container manager, anbox::rpc::MessageProcessor is a component used on both the initiating and the receiving end of RPC calls. The container manager, as the receiver of RPC calls, accepts messages of type MessageType::invocation sent by the session manager.
The message format of the RPC communication between the session manager and the container manager is: [3-byte message header] + [message body serialized from a Protobuf MessageLite object], where the first two bytes of the header hold the 16-bit body length in big-endian byte order and the third byte holds the message type. The RPC messages themselves are defined in anbox/src/anbox/protobuf/anbox_rpc.proto:
```protobuf
option optimize_for = LITE_RUNTIME;

package anbox.protobuf.rpc;

message Invocation {
  required uint32 id = 1;
  required string method_name = 2;
  required bytes parameters = 3;
  required uint32 protocol_version = 4;
}

message Result {
  optional uint32 id = 1;
  optional bytes response = 2;
  repeated bytes events = 3;
}

message StructuredError {
  optional uint32 domain = 1;
  optional uint32 code = 2;
}

message Void {
  optional string error = 127;
  optional StructuredError structured_error = 128;
}
```

The Invocation message initiates an RPC call; Result, Void, and StructuredError carry responses or error information back.
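To make the framing concrete, here is a small self-contained sketch of packing and unpacking the 3-byte header. The payload is an arbitrary byte string rather than a serialized Protobuf message, and the numeric values of the type constants are placeholders; only the layout (2-byte big-endian size plus 1-byte type) comes from the description above.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum MessageType : std::uint8_t { invocation = 1, response = 2 };  // values made up

std::vector<std::uint8_t> frame(std::uint8_t type,
                                const std::vector<std::uint8_t> &body) {
  std::vector<std::uint8_t> out;
  out.push_back(static_cast<std::uint8_t>((body.size() >> 8) & 0xff));  // size, high byte
  out.push_back(static_cast<std::uint8_t>(body.size() & 0xff));         // size, low byte
  out.push_back(type);                                                  // message type
  out.insert(out.end(), body.begin(), body.end());                      // payload
  return out;
}

int main() {
  const std::vector<std::uint8_t> body = {0xde, 0xad, 0xbe, 0xef};
  const auto msg = frame(invocation, body);

  // Receiver side: read the header first, then wait for `size` body bytes.
  const std::size_t size = (msg[0] << 8) | msg[1];
  const std::uint8_t type = msg[2];
  assert(size == body.size() && type == invocation);
  return 0;
}
```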
On the container-manager side, anbox::rpc::MessageProcessor's process_data() first extracts the message header to obtain the body's length and type, then extracts the body and deserializes it into a Protobuf message, anbox::protobuf::rpc::Invocation. That Protobuf message is wrapped in an object of class anbox::rpc::Invocation, and dispatch(Invocation const&) is called to dispatch the message.
The anbox::rpc::Invocation class is defined (in anbox/src/anbox/rpc/message_processor.h) as follows:
```cpp
class Invocation {
 public:
  Invocation(anbox::protobuf::rpc::Invocation const& invocation)
      : invocation_(invocation) {}

  const ::std::string& method_name() const;
  const ::std::string& parameters() const;
  google::protobuf::uint32 id() const;

 private:
  anbox::protobuf::rpc::Invocation const& invocation_;
};
```

and implemented (in anbox/src/anbox/rpc/message_processor.cpp) as follows:
```cpp
const ::std::string &Invocation::method_name() const {
  return invocation_.method_name();
}

const ::std::string &Invocation::parameters() const {
  return invocation_.parameters();
}

google::protobuf::uint32 Invocation::id() const { return invocation_.id(); }
```

The anbox::rpc::Invocation class is nothing more than a thin wrapper around anbox::protobuf::rpc::Invocation.
anbox::rpc::MessageProcessor's dispatch(Invocation const&) is a virtual function whose actual implementation lives in ManagementApiMessageProcessor. The definition of anbox::container::ManagementApiMessageProcessor (in anbox/src/anbox/container/management_api_message_processor.h) is as follows:
```cpp
namespace anbox {
namespace container {
class ManagementApiSkeleton;
class ManagementApiMessageProcessor : public rpc::MessageProcessor {
 public:
  ManagementApiMessageProcessor(
      const std::shared_ptr<network::MessageSender> &sender,
      const std::shared_ptr<rpc::PendingCallCache> &pending_calls,
      const std::shared_ptr<ManagementApiSkeleton> &server);
  ~ManagementApiMessageProcessor();

  void dispatch(rpc::Invocation const &invocation) override;
  void process_event_sequence(const std::string &event) override;

 private:
  std::shared_ptr<ManagementApiSkeleton> server_;
};
}  // namespace container
}  // namespace anbox
```

Its implementation (in anbox/src/anbox/container/management_api_message_processor.cpp) is as follows:
```cpp
namespace anbox {
namespace container {
ManagementApiMessageProcessor::ManagementApiMessageProcessor(
    const std::shared_ptr<network::MessageSender> &sender,
    const std::shared_ptr<rpc::PendingCallCache> &pending_calls,
    const std::shared_ptr<ManagementApiSkeleton> &server)
    : rpc::MessageProcessor(sender, pending_calls), server_(server) {}

ManagementApiMessageProcessor::~ManagementApiMessageProcessor() {}

void ManagementApiMessageProcessor::dispatch(rpc::Invocation const &invocation) {
  if (invocation.method_name() == "start_container")
    invoke(this, server_.get(), &ManagementApiSkeleton::start_container,
           invocation);
  else if (invocation.method_name() == "stop_container")
    invoke(this, server_.get(), &ManagementApiSkeleton::stop_container,
           invocation);
}

void ManagementApiMessageProcessor::process_event_sequence(const std::string &) {}
}  // namespace container
}  // namespace anbox
```

The implementation of anbox::container::ManagementApiMessageProcessor is simple: only two RPC calls are supported, one for starting the Android container and one for stopping it, and its dispatch() function invokes the corresponding handler based on the method name.
The invocation itself is carried out by a function template, invoke(), defined (in anbox/src/anbox/rpc/template_message_processor.h) as follows:
```cpp
namespace anbox {
namespace rpc {
// Utility metafunction result_ptr_t<> allows invoke() to pick the right
// send_response() overload. The base template resolves to the prototype
// "send_response(::google::protobuf::uint32 id, ::google::protobuf::Message*
// response)"
// Client code may specialize result_ptr_t to resolve to another overload.
template <typename ResultType>
struct result_ptr_t {
  typedef ::google::protobuf::MessageLite* type;
};

// Boiler plate for unpacking a parameter message, invoking a server function,
// and sending the result message. Assumes the existence of
// Self::send_response().
template <class Self, class Bridge, class BridgeX, class ParameterMessage,
          class ResultMessage>
void invoke(Self* self, Bridge* rpc,
            void (BridgeX::*function)(ParameterMessage const* request,
                                      ResultMessage* response,
                                      ::google::protobuf::Closure* done),
            Invocation const& invocation) {
  ParameterMessage parameter_message;
  if (!parameter_message.ParseFromString(invocation.parameters()))
    throw std::runtime_error("Failed to parse message parameters!");
  ResultMessage result_message;

  try {
    std::unique_ptr<google::protobuf::Closure> callback(
        google::protobuf::NewPermanentCallback<
            Self, ::google::protobuf::uint32,
            typename result_ptr_t<ResultMessage>::type>(
            self, &Self::send_response, invocation.id(), &result_message));

    (rpc->*function)(&parameter_message, &result_message, callback.get());
  } catch (std::exception const& x) {
    result_message.set_error(std::string("Error processing request: ") +
                             x.what());
    self->send_response(invocation.id(), &result_message);
  }
}
}  // namespace rpc
}  // namespace anbox
```

The responsibility for actually starting and stopping the Android container falls to anbox::container::ManagementApiSkeleton, whose definition (in anbox/src/anbox/container/management_api_skeleton.h) is as follows:
```cpp
class Container;
class ManagementApiSkeleton {
 public:
  ManagementApiSkeleton(
      const std::shared_ptr<rpc::PendingCallCache> &pending_calls,
      const std::shared_ptr<Container> &container);
  ~ManagementApiSkeleton();

  void start_container(
      anbox::protobuf::container::StartContainer const *request,
      anbox::protobuf::rpc::Void *response, google::protobuf::Closure *done);

  void stop_container(
      anbox::protobuf::container::StopContainer const *request,
      anbox::protobuf::rpc::Void *response, google::protobuf::Closure *done);

 private:
  std::shared_ptr<rpc::PendingCallCache> pending_calls_;
  std::shared_ptr<Container> container_;
};
```

The definition is straightforward; the implementation (in anbox/src/anbox/container/management_api_skeleton.cpp) is as follows:
```cpp
namespace anbox {
namespace container {
ManagementApiSkeleton::ManagementApiSkeleton(
    const std::shared_ptr<rpc::PendingCallCache> &pending_calls,
    const std::shared_ptr<Container> &container)
    : pending_calls_(pending_calls), container_(container) {}

ManagementApiSkeleton::~ManagementApiSkeleton() {}

void ManagementApiSkeleton::start_container(
    anbox::protobuf::container::StartContainer const *request,
    anbox::protobuf::rpc::Void *response, google::protobuf::Closure *done) {
  if (container_->state() == Container::State::running) {
    response->set_error("Container is already running");
    done->Run();
    return;
  }

  Configuration container_configuration;

  const auto configuration = request->configuration();
  for (int n = 0; n < configuration.bind_mounts_size(); n++) {
    const auto bind_mount = configuration.bind_mounts(n);
    container_configuration.bind_mounts.insert(
        {bind_mount.source(), bind_mount.target()});
  }

  try {
    container_->start(container_configuration);
  } catch (std::exception &err) {
    response->set_error(
        utils::string_format("Failed to start container: %s", err.what()));
  }

  done->Run();
}

void ManagementApiSkeleton::stop_container(
    anbox::protobuf::container::StopContainer const *request,
    anbox::protobuf::rpc::Void *response, google::protobuf::Closure *done) {
  (void)request;

  if (container_->state() != Container::State::running) {
    response->set_error("Container is not running");
    done->Run();
    return;
  }

  try {
    container_->stop();
  } catch (std::exception &err) {
    response->set_error(
        utils::string_format("Failed to stop container: %s", err.what()));
  }

  done->Run();
}
}  // namespace container
}  // namespace anbox
```

anbox::container::ManagementApiSkeleton starts and stops the Android container through the Container class. Taken together with the definition of the invoke() function template and the relevant Protobuf machinery, it is not hard to see that the parameter messages of start_container() and stop_container() are deserialized inside invoke() from the byte array in the Invocation message's parameters field. Both functions execute by filling the response destined for the caller into their response parameter and then, via done->Run(), sending it back to the caller through ManagementApiMessageProcessor::send_response(), i.e. anbox::rpc::MessageProcessor::send_response().
In anbox::rpc::MessageProcessor::send_response(), the response is first serialized, the serialized bytes are then placed into an anbox::protobuf::rpc::Result Protobuf message, and finally that Result is wrapped into an Anbox RPC message and sent back to the caller.
anbox::container::ManagementApiSkeleton's pending_calls_ likewise appears to serve no real purpose.
This completes the flow of accepting and handling an RPC call; the whole flow is shown in the figure below:
The session manager initiates RPC calls
In Anbox's session manager, anbox::container::Client initiates the connection to the container manager and handles the RPC communication between the two. The class is defined (in anbox/src/anbox/container/client.h) as follows:
```cpp
class Client {
 public:
  typedef std::function<void()> TerminateCallback;

  Client(const std::shared_ptr<Runtime> &rt);
  ~Client();

  void start(const Configuration &configuration);
  void stop();

  void register_terminate_handler(const TerminateCallback &callback);

 private:
  void read_next_message();
  void on_read_size(const boost::system::error_code &ec,
                    std::size_t bytes_read);

  std::shared_ptr<network::LocalSocketMessenger> messenger_;
  std::shared_ptr<rpc::PendingCallCache> pending_calls_;
  std::shared_ptr<rpc::Channel> rpc_channel_;
  std::shared_ptr<ManagementApiStub> management_api_;
  std::shared_ptr<rpc::MessageProcessor> processor_;
  std::array<std::uint8_t, 8192> buffer_;
  TerminateCallback terminate_callback_;
};
```

anbox::container::Client mainly exposes two operations, starting the container and stopping it, through which the session manager controls the container's lifecycle. The implementation of anbox::container::Client (in anbox/src/anbox/container/client.cpp) is as follows:
```cpp
Client::Client(const std::shared_ptr<Runtime> &rt)
    : messenger_(std::make_shared<network::LocalSocketMessenger>(
          SystemConfiguration::instance().container_socket_path(), rt)),
      pending_calls_(std::make_shared<rpc::PendingCallCache>()),
      rpc_channel_(std::make_shared<rpc::Channel>(pending_calls_, messenger_)),
      management_api_(std::make_shared<ManagementApiStub>(rpc_channel_)),
      processor_(
          std::make_shared<rpc::MessageProcessor>(messenger_, pending_calls_)) {
  read_next_message();
}

Client::~Client() {}

void Client::start(const Configuration &configuration) {
  try {
    management_api_->start_container(configuration);
  } catch (const std::exception &e) {
    ERROR("Failed to start container: %s", e.what());
    if (terminate_callback_)
      terminate_callback_();
  }
}

void Client::stop() { management_api_->stop_container(); }

void Client::register_terminate_handler(const TerminateCallback &callback) {
  terminate_callback_ = callback;
}

void Client::read_next_message() {
  auto callback = std::bind(&Client::on_read_size, this, std::placeholders::_1,
                            std::placeholders::_2);
  messenger_->async_receive_msg(callback, ba::buffer(buffer_));
}

void Client::on_read_size(const boost::system::error_code &error,
                          std::size_t bytes_read) {
  if (error) {
    if (terminate_callback_)
      terminate_callback_();
    return;
  }

  std::vector<std::uint8_t> data(bytes_read);
  std::copy(buffer_.data(), buffer_.data() + bytes_read, data.data());

  if (processor_->process_data(data)) read_next_message();
}
```

anbox::container::Client establishes the connection to the container manager over the Unix domain socket right in its constructor, and it initiates RPC calls through ManagementApiStub. ManagementApiStub is the interface layer on the initiating end of the RPC inter-process communication between the container manager and the session manager; it provides the abstractions of starting and stopping the Android container. Below ManagementApiStub sits the RPC layer of that communication, anbox::rpc::Channel, which mainly handles sending messages.
anbox::container::Client handles receiving the raw data on the connection itself, using the bare SocketMessenger directly rather than wrapping it in a SocketConnection again. Once it receives data, it hands the data off to anbox::rpc::MessageProcessor. The pending_calls_ member, of type anbox::rpc::PendingCallCache, mainly serves asynchronous RPC calls: in anbox::rpc::Channel, after a message has been sent, the sender does not block waiting for the response; instead a completion callback for the RPC call is registered in pending_calls_. When anbox::rpc::MessageProcessor later receives the response message, that completion callback is invoked and the initiator of the RPC call is notified.
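Putting these pieces together, usage on the session-manager side looks roughly like the sketch below. This is not verbatim Anbox code: the Runtime factory name and the bind-mount paths are assumptions made for illustration.

```cpp
// Hypothetical usage sketch of the session-manager side.
auto rt = anbox::Runtime::create();  // drives the boost::asio event loop
rt->start();

anbox::container::Client client(rt);
client.register_terminate_handler([]() {
  // Invoked when the connection to the container manager is lost or an
  // RPC call fails; the real session manager shuts itself down here.
});

anbox::container::Configuration cfg;
cfg.bind_mounts = {
    // socket published by the session manager -> path inside the container
    {"/run/user/1000/anbox/sockets/anbox_bridge", "/dev/anbox_bridge"},
};
client.start(cfg);  // blocks until the container manager replies
```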
The design of the components in anbox::container::Client that handle initiating RPC calls is framed as follows:
anbox::container::Client performs RPC calls directly through anbox::container::ManagementApiStub, whose definition (in anbox/src/anbox/container/management_api_stub.h) is as follows:
```cpp
class ManagementApiStub : public DoNotCopyOrMove {
 public:
  ManagementApiStub(const std::shared_ptr<rpc::Channel> &channel);
  ~ManagementApiStub();

  void start_container(const Configuration &configuration);
  void stop_container();

 private:
  template <typename Response>
  struct Request {
    Request() : response(std::make_shared<Response>()), success(true) {}
    std::shared_ptr<Response> response;
    bool success;
    common::WaitHandle wh;
  };

  void container_started(Request<protobuf::rpc::Void> *request);
  void container_stopped(Request<protobuf::rpc::Void> *request);

  mutable std::mutex mutex_;
  std::shared_ptr<rpc::Channel> channel_;
};
```

anbox::container::ManagementApiStub defines the interfaces for starting and stopping the container, along with the callbacks run once the container has started or stopped. It also defines the Request struct, which bundles a request's response with a WaitHandle; the WaitHandle is what the initiating end of an RPC call uses to wait for the request to finish.
The implementation of anbox::container::ManagementApiStub (in anbox/src/anbox/container/management_api_stub.cpp) is as follows:
```cpp
ManagementApiStub::ManagementApiStub(const std::shared_ptr<rpc::Channel> &channel)
    : channel_(channel) {}

ManagementApiStub::~ManagementApiStub() {}

void ManagementApiStub::start_container(const Configuration &configuration) {
  auto c = std::make_shared<Request<protobuf::rpc::Void>>();

  protobuf::container::StartContainer message;
  auto message_configuration = new protobuf::container::Configuration;

  for (const auto &item : configuration.bind_mounts) {
    auto bind_mount_message = message_configuration->add_bind_mounts();
    bind_mount_message->set_source(item.first);
    bind_mount_message->set_target(item.second);
  }

  message.set_allocated_configuration(message_configuration);

  {
    std::lock_guard<decltype(mutex_)> lock(mutex_);
    c->wh.expect_result();
  }

  channel_->call_method("start_container", &message, c->response.get(),
                        google::protobuf::NewCallback(
                            this, &ManagementApiStub::container_started, c.get()));

  c->wh.wait_for_all();

  if (c->response->has_error()) throw std::runtime_error(c->response->error());
}

void ManagementApiStub::container_started(Request<protobuf::rpc::Void> *request) {
  request->wh.result_received();
}

void ManagementApiStub::stop_container() {
  auto c = std::make_shared<Request<protobuf::rpc::Void>>();

  protobuf::container::StopContainer message;
  message.set_force(false);

  {
    std::lock_guard<decltype(mutex_)> lock(mutex_);
    c->wh.expect_result();
  }

  channel_->call_method("stop_container", &message, c->response.get(),
                        google::protobuf::NewCallback(
                            this, &ManagementApiStub::container_stopped, c.get()));

  c->wh.wait_for_all();

  if (c->response->has_error()) throw std::runtime_error(c->response->error());
}

void ManagementApiStub::container_stopped(Request<protobuf::rpc::Void> *request) {
  request->wh.result_received();
}
```

Although the underlying RPC calls are asynchronous, anbox::container::ManagementApiStub uses a condition variable to give its callers the illusion of synchronous execution. The start-container and stop-container operations are described by further Protobuf messages, defined (in anbox/src/anbox/protobuf/anbox_container.proto) as follows:
```protobuf
package anbox.protobuf.container;

message Configuration {
  message BindMount {
    required string source = 1;
    required string target = 2;
  }
  repeated BindMount bind_mounts = 1;
}

message StartContainer {
  required Configuration configuration = 1;
}

message StopContainer {
  optional bool force = 1;
}
```

In ManagementApiStub::start_container() and ManagementApiStub::stop_container(), the arguments are packed into the corresponding Protobuf message; the Request's WaitHandle is updated to record that one more response is expected; the RPC call is then issued through anbox::rpc::Channel with a completion callback registered; and finally the function blocks on the Request's WaitHandle.
Once the start-container or stop-container RPC call completes, the corresponding callback is invoked; it signals completion through the request's WaitHandle, and ManagementApiStub::start_container() or ManagementApiStub::stop_container() returns.
The design of ManagementApiStub actually has a few problems. First, the mutex_ member looks pointless. Second, the wait uses wait_for_all(), which blocks until its condition becomes true: if the container manager process dies unexpectedly, or for any other reason cannot send a response back, the session manager will wait there forever and never terminate. The right approach would be a wait with a timeout: after a bounded period, assume starting the container has failed and exit.
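A safer variant is sketched below. It is hypothetical, not the actual Anbox code, and simply swaps wait_for_all() for the bounded wait_for_pending() that WaitHandle already provides (shown later in this article); the 30-second limit is an arbitrary choice.

```cpp
// Hypothetical bounded-wait version of the tail of start_container().
channel_->call_method("start_container", &message, c->response.get(),
                      google::protobuf::NewCallback(
                          this, &ManagementApiStub::container_started, c.get()));

c->wh.wait_for_pending(std::chrono::seconds{30});  // bounded wait
if (!c->wh.has_result())
  // A complete fix would also drop the entry from PendingCallCache so a
  // late response cannot run the callback against freed state.
  throw std::runtime_error("Timed out waiting for the container manager");

if (c->response->has_error()) throw std::runtime_error(c->response->error());
```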
It is worth looking at the design and implementation of WaitHandle. The class is defined (in anbox/src/anbox/common/wait_handle.h) as follows:
```cpp
namespace anbox {
namespace common {
struct WaitHandle {
 public:
  WaitHandle();
  ~WaitHandle();

  void expect_result();
  void result_received();
  void wait_for_all();
  void wait_for_one();
  void wait_for_pending(std::chrono::milliseconds limit);

  bool has_result();
  bool is_pending();

 private:
  std::mutex guard;
  std::condition_variable wait_condition;

  int expecting;
  int received;
};
}  // namespace common
}  // namespace anbox
```

WaitHandle builds its waiting facility on the standard library's std::mutex and std::condition_variable. The implementation (in anbox/src/anbox/common/wait_handle.cpp) is as follows:
```cpp
namespace anbox {
namespace common {
WaitHandle::WaitHandle()
    : guard(), wait_condition(), expecting(0), received(0) {}

WaitHandle::~WaitHandle() {}

void WaitHandle::expect_result() {
  std::lock_guard<std::mutex> lock(guard);
  expecting++;
}

void WaitHandle::result_received() {
  std::lock_guard<std::mutex> lock(guard);
  received++;
  wait_condition.notify_all();
}

void WaitHandle::wait_for_all()  // wait for all results you expect
{
  std::unique_lock<std::mutex> lock(guard);
  wait_condition.wait(lock, [&] { return received == expecting; });
  received = 0;
  expecting = 0;
}

void WaitHandle::wait_for_pending(std::chrono::milliseconds limit) {
  std::unique_lock<std::mutex> lock(guard);
  wait_condition.wait_for(lock, limit, [&] { return received == expecting; });
}

void WaitHandle::wait_for_one()  // wait for any single result
{
  std::unique_lock<std::mutex> lock(guard);
  wait_condition.wait(lock, [&] { return received != 0; });
  --received;
  --expecting;
}

bool WaitHandle::has_result() {
  std::lock_guard<std::mutex> lock(guard);
  return received > 0;
}

bool WaitHandle::is_pending() {
  std::unique_lock<std::mutex> lock(guard);
  return expecting > 0 && received != expecting;
}
}  // namespace common
}  // namespace anbox
```

The waiting side calls expect_result() to tell the WaitHandle that one more response is expected, and waits for results with wait_for_all(), wait_for_pending(), or wait_for_one(). The thread that processes incoming messages notifies the waiting thread through result_received().
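A minimal usage example of this facility, independent of the RPC code:

```cpp
#include <thread>
#include "anbox/common/wait_handle.h"

int main() {
  anbox::common::WaitHandle wh;
  wh.expect_result();  // we are going to wait for exactly one result

  std::thread worker([&wh] {
    // ... produce the result (e.g. parse a response message) ...
    wh.result_received();  // wakes the waiting thread
  });

  wh.wait_for_all();  // blocks until received == expecting
  worker.join();
  return 0;
}
```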
anbox::rpc::PendingCallCache is a container holding the descriptions and completion callbacks of RPC calls whose request messages have been sent but which have not yet received a response. The class is defined (in anbox/src/anbox/rpc/pending_call_cache.h) as follows:
```cpp
namespace anbox {
namespace rpc {
class PendingCallCache {
 public:
  PendingCallCache();

  void save_completion_details(
      anbox::protobuf::rpc::Invocation const &invocation,
      google::protobuf::MessageLite *response,
      google::protobuf::Closure *complete);
  void populate_message_for_result(
      anbox::protobuf::rpc::Result &result,
      std::function<void(google::protobuf::MessageLite *)> const &populator);
  void complete_response(anbox::protobuf::rpc::Result &result);
  void force_completion();
  bool empty() const;

 private:
  struct PendingCall {
    PendingCall(google::protobuf::MessageLite *response,
                google::protobuf::Closure *target)
        : response(response), complete(target) {}
    PendingCall() : response(0), complete() {}
    google::protobuf::MessageLite *response;
    google::protobuf::Closure *complete;
  };

  std::mutex mutable mutex_;
  std::map<int, PendingCall> pending_calls_;
};
}  // namespace rpc
}  // namespace anbox
```

anbox::rpc::PendingCallCache also defines a PendingCall struct that bundles a request's response object with its completion callback, and stores PendingCalls in a map. Because the map is accessed both from anbox::rpc::MessageProcessor::process_data() and from the thread driving anbox::rpc::Channel, every access is protected by a lock for thread safety.
The implementation of anbox::rpc::PendingCallCache (in anbox/src/anbox/rpc/pending_call_cache.cpp) is as follows:
```cpp
namespace anbox {
namespace rpc {
PendingCallCache::PendingCallCache() {}

void PendingCallCache::save_completion_details(
    anbox::protobuf::rpc::Invocation const& invocation,
    google::protobuf::MessageLite* response,
    google::protobuf::Closure* complete) {
  std::unique_lock<std::mutex> lock(mutex_);
  pending_calls_[invocation.id()] = PendingCall(response, complete);
}

void PendingCallCache::populate_message_for_result(
    anbox::protobuf::rpc::Result& result,
    std::function<void(google::protobuf::MessageLite*)> const& populator) {
  std::unique_lock<std::mutex> lock(mutex_);
  populator(pending_calls_.at(result.id()).response);
}

void PendingCallCache::complete_response(anbox::protobuf::rpc::Result& result) {
  PendingCall completion;

  {
    std::unique_lock<std::mutex> lock(mutex_);
    auto call = pending_calls_.find(result.id());
    if (call != pending_calls_.end()) {
      completion = call->second;
      pending_calls_.erase(call);
    }
  }

  if (completion.complete) completion.complete->Run();
}

void PendingCallCache::force_completion() {
  std::unique_lock<std::mutex> lock(mutex_);
  for (auto& call : pending_calls_) {
    auto& completion = call.second;
    completion.complete->Run();
  }

  pending_calls_.erase(pending_calls_.begin(), pending_calls_.end());
}

bool PendingCallCache::empty() const {
  std::unique_lock<std::mutex> lock(mutex_);
  return pending_calls_.empty();
}
}  // namespace rpc
}  // namespace anbox
```

save_completion_details() places a call into the PendingCallCache, populate_message_for_result() feeds the returned response message into that call, and complete_response() signals the arrival of the result by running the corresponding completion callback.
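Condensed into one place, the life cycle of a pending call looks like the following sketch. The IDs and payloads are made up, and the include paths are assumptions following the usual protoc/Anbox layout; only the PendingCallCache API itself comes from the code above.

```cpp
#include <google/protobuf/stubs/common.h>  // google::protobuf::NewCallback
#include "anbox/rpc/pending_call_cache.h"
#include "anbox_rpc.pb.h"

static void on_done() { /* the RPC has completed */ }

int main() {
  anbox::rpc::PendingCallCache cache;

  // 1. Caller side: register the call before sending the Invocation.
  anbox::protobuf::rpc::Invocation invocation;
  invocation.set_id(42);  // made-up call id
  invocation.set_method_name("start_container");
  invocation.set_parameters("");  // serialized request bytes would go here
  invocation.set_protocol_version(1);

  anbox::protobuf::rpc::Void response;
  cache.save_completion_details(invocation, &response,
                                google::protobuf::NewCallback(&on_done));

  // 2. Receiver side: a Result carrying id 42 comes back.
  anbox::protobuf::rpc::Result result;
  result.set_id(42);
  result.set_response(anbox::protobuf::rpc::Void().SerializeAsString());

  cache.populate_message_for_result(
      result, [&](google::protobuf::MessageLite *m) {
        m->ParseFromString(result.response());  // fill the caller's response
      });
  cache.complete_response(result);  // erases the entry and runs on_done()
  return 0;
}
```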
anbox::rpc::Channel serializes messages and sends them out. Its definition (in anbox/src/anbox/rpc/channel.h) is as follows:
```cpp
class Channel {
 public:
  Channel(const std::shared_ptr<PendingCallCache> &pending_calls,
          const std::shared_ptr<network::MessageSender> &sender);
  ~Channel();

  void call_method(std::string const &method_name,
                   google::protobuf::MessageLite const *parameters,
                   google::protobuf::MessageLite *response,
                   google::protobuf::Closure *complete);
  void send_event(google::protobuf::MessageLite const &event);

 private:
  protobuf::rpc::Invocation invocation_for(
      std::string const &method_name,
      google::protobuf::MessageLite const *request);
  void send_message(const std::uint8_t &type,
                    google::protobuf::MessageLite const &message);
  std::uint32_t next_id();
  void notify_disconnected();

  std::shared_ptr<PendingCallCache> pending_calls_;
  std::shared_ptr<network::MessageSender> sender_;
  std::mutex write_mutex_;
};
```

anbox::rpc::Channel is also responsible for assigning an ID to each invocation message. Its implementation (in anbox/src/anbox/rpc/channel.cpp) is as follows:
```cpp
namespace anbox {
namespace rpc {
Channel::Channel(const std::shared_ptr<PendingCallCache> &pending_calls,
                 const std::shared_ptr<network::MessageSender> &sender)
    : pending_calls_(pending_calls), sender_(sender) {}

Channel::~Channel() {}

void Channel::call_method(std::string const &method_name,
                          google::protobuf::MessageLite const *parameters,
                          google::protobuf::MessageLite *response,
                          google::protobuf::Closure *complete) {
  auto const &invocation = invocation_for(method_name, parameters);
  pending_calls_->save_completion_details(invocation, response, complete);
  send_message(MessageType::invocation, invocation);
}

void Channel::send_event(google::protobuf::MessageLite const &event) {
  VariableLengthArray<2048> buffer{static_cast<size_t>(event.ByteSize())};
  event.SerializeWithCachedSizesToArray(buffer.data());

  anbox::protobuf::rpc::Result response;
  response.add_events(buffer.data(), buffer.size());

  send_message(MessageType::response, response);
}

protobuf::rpc::Invocation Channel::invocation_for(
    std::string const &method_name,
    google::protobuf::MessageLite const *request) {
  anbox::VariableLengthArray<2048> buffer{
      static_cast<size_t>(request->ByteSize())};

  request->SerializeWithCachedSizesToArray(buffer.data());

  anbox::protobuf::rpc::Invocation invoke;
  invoke.set_id(next_id());
  invoke.set_method_name(method_name);
  invoke.set_parameters(buffer.data(), buffer.size());
  invoke.set_protocol_version(1);

  return invoke;
}

void Channel::send_message(const std::uint8_t &type,
                           google::protobuf::MessageLite const &message) {
  const size_t size = message.ByteSize();
  const unsigned char header_bytes[header_size] = {
      static_cast<unsigned char>((size >> 8) & 0xff),
      static_cast<unsigned char>((size >> 0) & 0xff),
      type,
  };

  std::vector<std::uint8_t> send_buffer(sizeof(header_bytes) + size);
  std::copy(header_bytes, header_bytes + sizeof(header_bytes),
            send_buffer.begin());
  message.SerializeToArray(send_buffer.data() + sizeof(header_bytes), size);

  try {
    std::lock_guard<std::mutex> lock(write_mutex_);
    sender_->send(reinterpret_cast<const char *>(send_buffer.data()),
                  send_buffer.size());
  } catch (std::runtime_error const &) {
    notify_disconnected();
    throw;
  }
}

void Channel::notify_disconnected() { pending_calls_->force_completion(); }

std::uint32_t Channel::next_id() {
  static std::uint32_t next_message_id = 0;
  return next_message_id++;
}
}  // namespace rpc
}  // namespace anbox
```

call_method() initiates an RPC call: it saves the call's description and completion callback into pending_calls_, then sends the message. Beyond that, anbox::rpc::Channel mostly just drives Protobuf message serialization, which needs no further elaboration here.
Finally, consider how the initiating end handles an incoming response message, namely the following part of anbox::rpc::MessageProcessor (in anbox/src/anbox/rpc/message_processor.cpp):
```cpp
    } else if (message_type == MessageType::response) {
      auto result = make_protobuf_object<protobuf::rpc::Result>();
      result->ParseFromArray(buffer_.data() + header_size, message_size);

      if (result->has_id()) {
        pending_calls_->populate_message_for_result(
            *result, [&](google::protobuf::MessageLite *result_message) {
              result_message->ParseFromString(result->response());
            });
        pending_calls_->complete_response(*result);
      }

      for (int n = 0; n < result->events_size(); n++)
        process_event_sequence(result->events(n));
    }

    buffer_.erase(buffer_.begin(),
                  buffer_.begin() + header_size + message_size);
  }
```

This code feeds the response message into the corresponding Invocation saved in pending_calls_ and runs the completion callback.
The session manager and container manager in Anbox communicate through these two RPC calls; the whole process on the initiating end is shown in the figure below:
Done.