Hi, I would like to ask about data migration. I want to migrate data from my Cosmos DB MongoDB 3.2 account to a Cosmos DB MongoDB 3.6 account and decided to use ADF. Two weeks ago everything worked well, but now I get error 2200. I double-checked the connection strings. When I test the connections, everything is green, and I am also able to preview the data. But when I try to debug my pipeline, I get the error. The same happens when I run the pipeline with a trigger from the PowerShell Invoke-AzDataFactoryV2Pipeline cmdlet. Below is the error from the console.
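For reference, this is roughly how I trigger the pipeline from PowerShell (the resource group, data factory, and pipeline names below are placeholders):

    # Requires the Az.DataFactory module and an authenticated session (Connect-AzAccount)
    Invoke-AzDataFactoryV2Pipeline `
        -ResourceGroupName "my-resource-group" `
        -DataFactoryName "my-data-factory" `
        -PipelineName "CopyCosmosMongoCollections"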

Operation on target Copy data between cosmos db mongo db collections failed: Failure happened on 'Source' side. ErrorCode=MongoDbConnectionTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=>Connection to MongoDB server is timeout.,Source=Microsoft.DataTransfer.Runtime.MongoDbV2Connector,''Type=System.TimeoutException,Message=A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : "2", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 2, EndPoint : "Unspecified/username.documents.azure.com:10255" }", EndPoint: "Unspecified/username2.documents.azure.com:10255", State: "Disconnected", Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.Net.Sockets.SocketException: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server

Hello @chodkows,

Thanks for the question and welcome to the Microsoft Q&A Platform.

Please change the host name in the connection string from '.mongo.cosmos.azure.com' to '.documents.azure.com' and then retry.
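For example, assuming a typical Cosmos DB MongoDB connection string (the account name and key below are placeholders), the change would look roughly like this:

    Before: mongodb://myaccount:<accountKey>@myaccount.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb
    After:  mongodb://myaccount:<accountKey>@myaccount.documents.azure.com:10255/?ssl=true&replicaSet=globaldb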

Please let us know how it goes.

Thanks
Himanshu

Hello ,
Just wanted to follow up and check whether the above suggestion helped you resolve the issue. Also, in case you have a better workaround or resolution, please do share it with the community, as it will help other community members.

Thanks & stay safe

Himanshu

I had the same issue, and it was fixed by switching to the "Azure Key Vault" option, since my Cosmos DB connection string is stored as a secret.

I also had to grant my Azure Data Factory permission to read the secrets from the vault.
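In my case, granting that access looked roughly like the following (the vault, resource group, and factory names are placeholders, and this assumes the Key Vault access policy model rather than Azure RBAC):

    # Look up the data factory's managed identity and allow it to read secrets
    $adf = Get-AzDataFactoryV2 -ResourceGroupName "my-resource-group" -Name "my-data-factory"
    Set-AzKeyVaultAccessPolicy `
        -VaultName "my-key-vault" `
        -ObjectId $adf.Identity.PrincipalId `
        -PermissionsToSecrets Get,List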

I hope this will help.

Hi @chodkows,

Just heard back from the product team confirming that a fix for this issue was deployed recently. Could you confirm whether your issue is resolved?

Looking forward to your confirmation.
Thank you

Hi @chodkows,

Sorry you are experiencing this.

After discussing this with the internal team, the ADF engineering team has identified a bug related to this issue. To work around it, could you please manually check the linked service JSON payload to see whether the connection string contains a "tls=true" property?

Change the "tls=true" to "ssl=true" in connection string and rerun failed pipelines. The fix for this issue is currently under deployment, until the fix go live, unfortunately current linked services need this manual effort to correct it.

If you already specified "ssl=true" when creating the linked service, please open the linked service code after testing the connection and previewing data, but before running the pipeline, and double-check it: the "ssl=true" property may have been auto-changed to "tls=true". If so, please change it back to "ssl=true".
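For illustration, the relevant part of the linked service JSON might look roughly like the snippet below after the manual edit (the account name, key, and database are placeholders, and the exact property layout may differ slightly in your factory):

    {
        "name": "CosmosDbMongoSource",
        "properties": {
            "type": "CosmosDbMongoDbApi",
            "typeProperties": {
                "connectionString": "mongodb://myaccount:<accountKey>@myaccount.documents.azure.com:10255/?ssl=true&replicaSet=globaldb",
                "database": "mydatabase"
            }
        }
    }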

Please let us know how it goes. If this workaround doesn't resolve your issue, please share the details requested by Himanshu so that we can escalate it to the internal team for deeper analysis.

Thank you for your patience.