I'm running into an infinite loop when the model paths in model_config_list are not correct in tf-serving. This seems to be a known behaviour of tf-serving, and there appear to be two different ways of avoiding it.
Both of them are related to options that are set up in server_core.cc. The first one is servable_versions_always_present, which is present in my current version (2.14.1); the other is should_retry_model_load, which is in 2.18. The problem is that neither of them is exposed as a command-line flag (as you can see in main), so you can't just do tensorflow_model_server --model_config_file=/tf/models/models.config --servable_versions_always_present=true, for example.
The thing is that, even though it isn't documented anywhere I could find, I saw a couple of comments on GitHub recommending setting these options through the models.config file, but every attempt I have made has failed.
e.g.:
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model/model"
    model_platform: "tensorflow"
  }
  servable_versions_always_present: true
}
or
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model/model"
    model_platform: "tensorflow"
    servable_versions_always_present: true
  }
}
So the question is: is there any way to set either of those two parameters from the models.config, or by any other means that doesn't involve adding them to main as flags or modifying the code directly?
Thanks in advance.
Title: tensorflow - Can't setup specific parameters of server_core through model_config_list - Stack Overflow