I just found the HTML5 File API the other day, so I had to see what I could do with it in the APEX Listener's RESTful services. There are plenty of blog posts on what can be done with it, such as the one on HTML5rocks.

  The end result is that the new File API lets JavaScript get the details of a file and slice it up into parts. Then I made a pretty simple REST endpoint to receive the chunks and put them back together again.

The actual sending part of the JavaScript is here:
function sendChunk(chunkNumber) {
    var reader = new FileReader();
    var start = chunkSize * (chunkNumber - 1);
    // Blob.slice treats the end position as exclusive
    var end = start + chunkSize;

    // create the slice of the file
    var fileContent = selectedFile.slice(start, end);

    // grab the length of this slice (the last chunk may be short)
    var length = fileContent.size;

    // read the slice of the file
    // (not strictly required here; the Blob slice is posted directly below)
    reader.readAsArrayBuffer(fileContent);

    $.ajax({
        url: uri,
        type: "POST",
        data: fileContent,
        processData: false,
        beforeSend: function (xhr) {
            // pass in the chunk number, offset, size and name
            // as headers
            xhr.setRequestHeader('x-chunknumber', chunkNumber);
            xhr.setRequestHeader('x-filename', selectedFile.name);
            xhr.setRequestHeader('x-offset', start);
            xhr.setRequestHeader('x-chunksize', length);
            xhr.setRequestHeader('x-content-type', selectedFile.type);
        },
        success: function (data, status) {
            console.log(data);
            console.log(status);
            bytesUploaded += length;

            // set the percent complete
            var percentComplete = ((bytesUploaded / selectedFile.size) * 100).toFixed(2);
            $("#fileUploadProgress").text(percentComplete + " %");

            // make a link to the REST endpoint that can deliver the file
            // (the href assumes a GET handler at uri/filename -- adjust to match your service)
            $("#downloadLink").html('<a href="' + uri + '/' + selectedFile.name + '">New File</a>');

            // if there are more chunks, send them over
            if (chunkNumber < chunks) {
                sendChunk(chunkNumber + 1);
            }
        },
        error: function (xhr, desc, err) {
            console.log(desc);
            console.log(err);
        }
    });
}
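
The function above leans on a few globals -- selectedFile, chunkSize, chunks, bytesUploaded and uri -- that get set when a file is picked. A minimal sketch of that wiring, where the element id, chunk size and uri are just placeholders for illustration:

// minimal setup sketch -- only the variable names are dictated by sendChunk above
var chunkSize = 1024 * 1024;            // 1 MB per chunk (placeholder)
var uri = "/apex/demo/files/upload";    // assumed path to the POST handler
var selectedFile, chunks, bytesUploaded;

$(function () {
    $("#fileToUpload").change(function () {
        selectedFile  = this.files[0];
        chunks        = Math.ceil(selectedFile.size / chunkSize);
        bytesUploaded = 0;
        // send the first chunk; each success callback sends the next one
        sendChunk(1);
    });
});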


The next step is to make the HTTP headers into bind variables so the PL/SQL block will be able to use them.
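
That mapping is done in the RESTful service definition, one parameter per header. Newer ORDS releases also let you script the same mapping with the ORDS PL/SQL package; the sketch below is only an illustration with made-up module and template names, not the definition used here.

begin
  -- hypothetical module/template names; repeat for x-offset, x-chunksize,
  -- x-chunknumber and x-content-type as well
  ords.define_parameter(
      p_module_name        => 'chunked.upload',
      p_pattern            => 'upload',
      p_method             => 'POST',
      p_name               => 'x-filename',
      p_bind_variable_name => 'filename',
      p_source_type        => 'HEADER',
      p_param_type         => 'STRING',
      p_access_method      => 'IN');
  commit;
end;

With the headers mapped, the POST handler's PL/SQL block looks like this: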


declare
    p_b         blob;
    p_body      blob;
    p_offset    number;
    p_filename  varchar2(4000);
    p_chunksize varchar2(200);
    p_status    varchar2(200);
begin
    -- pull the binds into locals
    p_offset    := :OFFSET + 1;
    p_body      := :body;
    p_filename  := :filename;
    p_chunksize := :chunksize;

    -- NOT FOR PRODUCTION OR REAL APPS
    -- if there is already a file with this name, nuke it since this is chunk number one
    if ( :chunkNumber = 1 ) then
        p_status := 'DELETING';
        delete from chunked_upload
         where filename = p_filename;
    end if;

    -- grab the blob storing the first chunks
    select blob_data
      into p_b
      from chunked_upload
     where filename = p_filename
       for update of blob_data;

    p_status := 'WRITING';

    -- append this chunk to it
    dbms_lob.append(p_b, p_body);

    commit;
exception
    -- if no blob was found above, this is the first chunk, so do the insert
    when no_data_found then
        p_status := 'INSERTING';
        insert into chunked_upload (filename, blob_data, offset, content_type)
        values (p_filename, p_body, p_offset, :contenttype);
        commit;
    when others then
        -- when something blows up, print the status and error message to the client
        htp.p(p_status);
        htp.p(SQLERRM);
end;
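
The CHUNKED_UPLOAD table isn't defined here; something along these lines, with the column types guessed from how the block uses them, is enough to try it out:

-- guessed shape of the backing table, based on the columns the block touches
create table chunked_upload (
    filename     varchar2(4000),
    blob_data    blob,
    offset       number,
    content_type varchar2(200)
);

The GET handler that the download link points at can then be a simple Media Resource query that selects content_type and blob_data for the given filename.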



A very simple HTML page ties it all together for testing.
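
A minimal page along these lines works; the element ids match the JavaScript above, and the jQuery CDN reference and upload.js filename are just placeholders:

<!DOCTYPE html>
<html>
  <head>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
    <script src="upload.js"></script> <!-- the sendChunk code shown above -->
  </head>
  <body>
    <input type="file" id="fileToUpload">
    <span id="fileUploadProgress">0 %</span>
    <div id="downloadLink"></div>
  </body>
</html>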



Here's a quick video of how it all works.




The complete JavaScript/HTML for this sample is available on JS Bin.