Last night, one of the copy activities in our Data Factory pipeline failed with the following message:

Error code 400:

ErrorCode=InvalidTemplate, ErrorMessage=The function 'json' parameter is not valid. The provided value 'The page was not displayed because the request entity is too large.' cannot be parsed: 'Unexpected character encountered while parsing value: T. Path '', line 0, position 0

The same pipeline ran fine the night before, and a rerun also completed successfully, with no deployment in between.
What could have been the problem?

Pipeline run id 7649d7d5-0685-4d9a-81b3-d380c43023eb
Activity run id c74ee297-3156-41a9-93a2-0d4a78d77928

Hi @ThomasSchouten-7598 ,
Welcome to Microsoft Q&A platform and thanks for posting your question here.
It seems you are using a Copy activity in ADF to load data from source to sink. The pipeline was working fine until now; however, it has started failing with the error: "The function 'json' parameter is not valid. The provided value 'The page was not displayed because the request entity is too large.' cannot be parsed"
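The wording of the error suggests ADF received an HTML/plain-text error page ("The page was not displayed because the request entity is too large.") where it expected JSON, and then tried to parse that page with the `json()` expression function. A minimal sketch of that failure mode (plain Python, not ADF internals, for illustration only):

```python
import json

# The value ADF tried to parse was an error page, not JSON.
body = "The page was not displayed because the request entity is too large."

try:
    json.loads(body)
except json.JSONDecodeError as exc:
    # Fails on the very first character 'T' -- the same class of failure
    # ADF reports as "Unexpected character encountered while parsing
    # value: T. Path '', line 0, position 0".
    print(exc)
```

Any JSON parser rejects this input at position 0, which matches the "Unexpected character ... T ... position 0" detail in the error message.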

As you mentioned, no changes have been made to the pipeline, so I suspect the source data is the cause of the error. Could you please share the source and sink data store types and the pipeline's configuration details so we can help you better?

If the source is a REST API, the activity might be failing because the data arrives in multiple pages and pagination is not handled properly.
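If pagination turns out to be the cause, the REST source can be told how to follow pages explicitly via `paginationRules`. A hypothetical fragment (the `AbsoluteUrl` rule is part of the ADF REST connector; `$.nextLink` is an assumed field name in the API's response and must be adapted to the actual payload):

```json
{
    "source": {
        "type": "RestSource",
        "paginationRules": {
            "AbsoluteUrl": "$.nextLink"
        }
    }
}
```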

Please reply with the required details so we can better help resolve your query. Thanks!

Hi @AnnuKumari-MSFT ,

No, I verified that the source data was not the problem by rerunning exactly the same copy activity on the same source data.
The activity copies a Parquet file to a Synapse Analytics workspace. This is the full definition:

210119-myactivityjson.txt

We too have started getting this error in ADF over the last week or so. It is a brand-new error and occurs only intermittently; subsequent runs are fine.
What has changed in ADF?

ErrorCode=InvalidTemplate, ErrorMessage=The function 'json' parameter is not valid. The provided value 'The page was not displayed because the request entity is too large.' cannot be parsed: 'Unexpected character encountered while parsing value: T. Path '', line 0, position 0

First ever occurrence was 2022-06-08 19:58:36 GMT

Pipeline run ID 8284b3ac-d0b9-4c10-bade-899e837bbf07

Same experience as the comment by Kev-6614: we have encountered this error on two different pipelines in the last week, but have never seen it before, and no deployments have occurred to explain the change in behaviour. Both failing activities were doing quite simple things: one was iterating on a switch, the other setting a variable. Both were querying metadata that hasn't changed from previous runs.

In both cases a restart has cured the issue.

Fail Date Time: 2022-06-14 03:49:17AM
Pipeline RunId 8746ad14-6c51-4586-a5e9-85867d699f50
Activity ID ba6abb33-ad8a-4ba2-ab23-6167b664ba9a

Fail Date Time: 2022-06-09 02:07:28AM
Pipeline RunId f4e75c4b-8271-45fc-a166-473df4bc810a
ActivityID 7ee17e6c-2a25-4920-8e1a-5be83d588666

We are seeing the same on our side. In our case it was a combination of an Execute SSIS Package activity and a Script activity being run; the failure was on the Execute SSIS Package activity.

I have had two failures now with this error; re-running from the failed activity reproduces the error, whilst running the pipeline in full does not.

ErrorCode=InvalidTemplate, ErrorMessage=The function 'json' parameter is not valid. The provided value 'The page was not displayed because the request entity is too large.' cannot be parsed: 'Unexpected character encountered while parsing value: T. Path '', line 0, position 0

Same as Kev-6614 for me. It has been occurring randomly since the beginning of June for the Get Metadata activity, and for the last five days for the Copy activity. The pipeline and dataset sources have not changed in weeks. And as stated above, it is not reproducible: triggering the pipeline again either fails randomly on another file or succeeds without any issues.

Update from Microsoft support:

Over the past few days we have received similar cases globally with the same error message and raised this to our product team. They worked on it with priority and confirmed that there was an issue at the backend, which has now been mitigated.

This answers the question, in the sense that the issue is now fixed.