I'm using v2.1 on Windows 7/Server 2008. I need to start a number of REST tasks that can run in parallel, but the consumer of those tasks' output can only process the output of one task at a time. My first design attempt puts the REST tasks in a vector. The consumer periodically checks task.is_done() on each task, consumes the output of any that have completed, and removes them from the vector, repeating until the vector is empty.
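Stripped down, the consuming loop looks roughly like this (assuming pplx::task from the C++ REST SDK; the std::string result type and the process() consumer are placeholders for the real ones):

    #include <chrono>
    #include <string>
    #include <thread>
    #include <vector>
    #include <pplx/pplxtasks.h>          // pplx::task

    void process(const std::string& result);   // hypothetical consumer; handles one result at a time

    void consume_all(std::vector<pplx::task<std::string>> tasks)
    {
        while (!tasks.empty())
        {
            for (auto it = tasks.begin(); it != tasks.end(); )
            {
                if (it->is_done())
                {
                    process(it->get());      // get() rethrows any exception stored in the task
                    it = tasks.erase(it);
                }
                else
                {
                    ++it;
                }
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(50));   // periodic re-check
        }
    }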
This works fine unless a task throws an exception. When that happens the exception exits the consuming function and is handled further up the call stack. This causes any remaining tasks to be destructed, which I think is where my problem lies. The exception itself is caught and handled, but something (probably _REPORT_PPLTASK_UNOBSERVED_EXCEPTION()) is terminating the program.
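For what it's worth, here is a stripped-down illustration of what I think is happening when one of those remaining tasks is destructed (again assuming pplx::task):

    #include <stdexcept>
    #include <pplx/pplxtasks.h>

    void leak_a_failed_task()
    {
        auto failing = pplx::create_task([]() -> int
        {
            throw std::runtime_error("simulated REST failure");
        });
        // No get(), wait(), or continuation ever observes the exception,
        // so when the last reference to the failed task goes away the
        // runtime reports it as unobserved
        // (_REPORT_PPLTASK_UNOBSERVED_EXCEPTION) and ends the process.
    }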
I believe I need to cancel the remaining tasks before I allow them to be destructed. I can give them a cancellation_token, but there may be six or more still running. Triggering and handling all those task_canceled exceptions would be messy.
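To make the messiness concrete, this is roughly the cleanup I'd end up writing (a sketch only, assuming a pplx::cancellation_token_source shared by all the tasks and passed as cts.get_token() when each one is created):

    #include <string>
    #include <vector>
    #include <pplx/pplxtasks.h>

    pplx::cancellation_token_source cts;            // shared by all the REST tasks
    std::vector<pplx::task<std::string>> tasks;     // the still-running tasks

    void abandon_remaining()
    {
        cts.cancel();                               // ask every outstanding task to stop
        for (auto& t : tasks)
        {
            try
            {
                t.get();                            // observe the outcome; the value is discarded
            }
            catch (const pplx::task_canceled&)
            {
                // one of these per cancelled task -- the messy part
            }
            catch (...)
            {
                // a task may also have failed on its own before the cancel
            }
        }
        tasks.clear();
    }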
This suggests that I'm not doing this right. What is the proper way to wait for and sequentially consume the output of a number of tasks?