
I have an app that needs to read a PDF file from the file system and then write it out to the user. The PDF is 183KB and opens fine from the file system. When I use the code below, the browser receives a 224KB file and Acrobat Reader reports that the file is damaged and cannot be repaired.

Here is my code (I've also tried using File.ReadAllBytes(), but I get the same thing):

using (FileStream fs = File.OpenRead(path))
{
    int length = (int)fs.Length;
    byte[] buffer;

    using (BinaryReader br = new BinaryReader(fs))
    {
        buffer = br.ReadBytes(length);
    }

    Response.Clear();
    Response.Buffer = true;
    Response.AddHeader("content-disposition", String.Format("attachment;filename={0}", Path.GetFileName(path)));
    Response.ContentType = "application/" + Path.GetExtension(path).Substring(1);
    Response.BinaryWrite(buffer);
}
                Are you seeing 224KB in the code sample you provided (fs.Length), or at the other end when you read this back in?
– Jon B
                May 11, 2009 at 15:38
                I checked the size after I got the file back. I was forgetting to put a Response.End() on there, as pointed out by BarneyHDog.
– jhunter
                May 12, 2009 at 19:35
                This is not entirely related, but it deals with the filename you add to the header. Not sure if it's fixed now, but Chrome would give me a "Duplicate Headers" warning when the file name contained a comma, until I changed the header to the following: context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");  This surrounds the filename in quotes. Here's a link to the reference: What is the "Duplicate Headers" Warning?
– fujiiface
                Sep 21, 2015 at 15:53
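
For illustration, a minimal sketch of the quoting fix described in the comment above (the fileName variable is introduced here just for the example):

    // Quoting the filename keeps a comma (or space) in it from splitting the
    // Content-Disposition header, which Chrome reports as "Duplicate Headers".
    string fileName = Path.GetFileName(path);  // e.g. "Report, Final.pdf"
    Response.AddHeader("Content-Disposition",
        String.Format("attachment; filename=\"{0}\"", fileName));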
                This is kind of like flushing the buffer. Very important, because every byte counts to make the stream valid.
– Roland
                Sep 5, 2014 at 12:56
                For future readers, be careful: this brings down our production application pool every time it is hit, for some reason.
– Marie
                Aug 3, 2020 at 17:03

We've used this with a lot of success: WriteFile does the download for you, and a Flush / End at the end sends it all to the client.

            //Use these headers instead to display a save as / download dialog
            //Response.ContentType = "application/octet-stream";
            //Response.AddHeader("Content-Disposition", String.Format("attachment; filename={0}", Path.GetFileName(path)));
            Response.ContentType = "application/pdf";
            Response.AddHeader("Content-Disposition", String.Format("inline; filename={0}", Path.GetFileName(path)));
            Response.WriteFile(path);
            Response.Flush();
            Response.End();

Since you're sending the file directly from your filesystem with no intermediate processing, why not use Response.TransmitFile instead?

Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition",
    "attachment; filename=\"" + Path.GetFileName(path) + "\"");
Response.TransmitFile(path);
Response.End();

(I suspect that your problem is caused by a missing Response.End, meaning that you're sending the rest of your page's content appended to the PDF data.)
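
For illustration, here is a rough sketch of that fix applied to the question's code, using the File.ReadAllBytes variant the question mentions having tried; the only substantive change is the added Response.End():

    // Read the PDF and write it to the response, then end the response so the
    // rest of the page's HTML is not appended after the PDF bytes.
    byte[] buffer = File.ReadAllBytes(path);

    Response.Clear();
    Response.Buffer = true;
    Response.AddHeader("content-disposition",
        String.Format("attachment; filename={0}", Path.GetFileName(path)));
    Response.ContentType = "application/pdf";
    Response.BinaryWrite(buffer);
    Response.End();  // without this, the extra page output corrupts the file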

Just for future reference, as stated in this blog post: http://blogs.msdn.com/b/aspnetue/archive/2010/05/25/response-end-response-close-and-how-customer-feedback-helps-us-improve-msdn-documentation.aspx

It is not recommended to call Response.Close() or Response.End() - instead use CompleteRequest().

Your code would look somewhat like this:

    byte[] bytes = GetBytesFromDB();  // I use a similar way to get pdf data from my DB
    Response.Clear();
    Response.ClearHeaders();
    Response.Buffer = true;
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.ContentType = "application/pdf";
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + anhangTitel);
    Response.AppendHeader("Content-Length", bytes.Length.ToString());
    Response.BinaryWrite(bytes);  // write the PDF bytes before completing the request
    this.Context.ApplicationInstance.CompleteRequest();
                Wow, thanks for sharing. I kept getting "OutputStream is not available when a custom TextWriter is used", and modifying my code to be similar to yours fixed part of my issue.
– JoshYates1980
                Mar 28, 2016 at 18:56

In my MVC application, I have enabled gzip compression for all responses. If you read this binary write from an ajax call with gzipped responses, you get the gzipped byte array rather than the original byte array you need to work with.

// C# controller: the [compress] filter gzips the result after Response.BinaryWrite
[compress]
public ActionResult Print(int id)
{
    var byteArray = someService.BuildPdf(id);
    return this.PDF(byteArray, "test.pdf");
}

// where PDF is a custom ActionResult that eventually does this:
public class PDFResult : ActionResult
{
    private readonly byte[] byteArray;
    private readonly string fileName;

    public PDFResult(byte[] byteArray, string fileName)
    {
        this.byteArray = byteArray;
        this.fileName = fileName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Set the HTTP headers for the PDF download
        HttpContext.Current.Response.Clear();
        //HttpContext.Current.Response.ContentType = "application/vnd.ms-excel";
        HttpContext.Current.Response.ContentType = "application/pdf";
        HttpContext.Current.Response.AddHeader("content-disposition", string.Concat("attachment; filename=", fileName));
        HttpContext.Current.Response.AddHeader("Content-Length", byteArray.Length.ToString());
        // Write the pdf file as a byte array to the response
        HttpContext.Current.Response.BinaryWrite(byteArray);
        HttpContext.Current.Response.End();
    }
}
// javascript
function pdf(mySearchObject) {
    return $http({
        method: 'Post',
        url: '/api/print/',
        data: mySearchObject,
        responseType: 'arraybuffer',
        headers: {
            'Accept': 'application/pdf'
        }
    }).then(function (response) {
        var type = response.headers('Content-Type');
        // if response.data is gzipped, this blob will be incorrect; you have to uncompress it first.
        var blob = new Blob([response.data], { type: type });
        var fileName = response.headers('content-disposition').split('=').pop();
        if (window.navigator.msSaveOrOpenBlob) { // for IE and Edge
            window.navigator.msSaveBlob(blob, fileName);
        } else {
            var anchor = angular.element('<a/>');
            anchor.css({ display: 'none' }); // Make sure it's not visible
            angular.element(document.body).append(anchor); // Attach to document
            anchor.attr({
                href: URL.createObjectURL(blob),
                target: '_blank',
                download: fileName
            })[0].click();
            anchor.remove();
        }
    });
}
" var blob = new Blob([response.data], { type: type }); " This will give you that invalid/corrupt file that you are trying to open when you turn that byte array into a file in your javascript if you don't uncompress it first.

To fix this, you can either prevent gzipping this binary data so that it can be turned directly into the file you are downloading, or decompress the gzipped data in your javascript code before you turn it into a file. A sketch of the first option follows.
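
Since the [compress] attribute's implementation isn't shown above, assume for the sketch that it is a typical MVC action filter that wraps Response.Filter in a GZipStream; the CompressAttribute class below is a hypothetical stand-in, and the fix is simply to leave it off the PDF action:

    using System.IO.Compression;   // GZipStream, CompressionMode
    using System.Web.Mvc;          // ActionFilterAttribute, ActionExecutingContext

    // Hypothetical stand-in for the [compress] filter: gzips the response only
    // for actions that are decorated with it.
    public class CompressAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var request = filterContext.HttpContext.Request;
            var response = filterContext.HttpContext.Response;
            var acceptEncoding = request.Headers["Accept-Encoding"];

            if (!string.IsNullOrEmpty(acceptEncoding) && acceptEncoding.Contains("gzip"))
            {
                response.AppendHeader("Content-Encoding", "gzip");
                response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
            }
        }
    }

    // Option 1: do not decorate the PDF action, so Response.BinaryWrite sends the
    // raw bytes and the JavaScript Blob is built from uncompressed data.
    public ActionResult Print(int id)
    {
        var byteArray = someService.BuildPdf(id);
        return this.PDF(byteArray, "test.pdf");
    }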
