Abend B37-04 -- RESOLVED!

Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
IT WORKS!  Completion code 00!

What I did:

1. Changed my Herc config to NOT specify a pre-mounted tape on drive 591:

     0591  3590  *

This is simply to prevent "MVS" from unloading the tape when I respond to its first "which drive is this unlabeled tape on?" query, which I find annoying.  Now it just issues a mount message on 591 for the volser I specified on my DD statement (upon which I then do my Hercules 'devinit' and the job immediately takes off).

2. As suggested by many, I changed the BLKSIZE on my output SYSUT2 DCB to 27648 (but left the BLKSIZE on my SYSUT1 input tape DCB set to 30720).

3. Changed the DISP parameter on my SYSUT2 output DD to NEW,KEEP,DELETE (rather than NEW,KEEP,KEEP), so I wouldn't have to keep manually deleting my dataset should something go wrong (which can be a PITA to have to keep doing each time).

4. Changed the SPACE parameter on my SYSUT2 output DD to CYL,4368,RLSE.
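
(For anyone following along, here's roughly what all of the above boils down to.  A sketch only -- the actual JCL wasn't posted, the SYSUT1/SYSUT2 names assume an IEBGENER-style copy, and the input tape volser, LRECL and AWS filename below are placeholders; the rest are the values described above.)

Hercules console, when "MVS" asks for the tape:

     devinit 0591 /path/to/input.aws

And the copy step's DD statements, more or less:

     //SYSUT1   DD UNIT=591,VOL=SER=TAPE01,LABEL=(1,NL),DISP=OLD,
     //            DCB=(RECFM=FB,LRECL=1024,BLKSIZE=30720)
     //SYSUT2   DD DSN=IBMUSER.FISH.TESTFILE.#001,DISP=(NEW,KEEP,DELETE),
     //            UNIT=3390,VOL=SER=FISH01,SPACE=(CYL,(4368),RLSE),
     //            DCB=(RECFM=FB,LRECL=1024,BLKSIZE=27648)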

Now it runs to successful completion! (Yea!)

Here's what 3.4 "info" now shows:


    Data Set Name . . . . : IBMUSER.FISH.TESTFILE.#001

    General Data                           Current Allocation
     Management class . . : **None**        Allocated cylinders : 4,362
     Storage class  . . . : **None**        Allocated extents . : 1
      Volume serial . . . : FISH01
      Device type . . . . : 3390
     Data class . . . . . : **None**       Current Utilization
      Organization  . . . : PS              Used cylinders  . . : 4,362
      Record format . . . : FB              Used extents  . . . : 1
      Record length . . . : 1024
      Block size  . . . . : 27648
      1st extent cylinders: 4362
      Secondary cylinders : 0
      Data set name type  :                 SMS Compressible  :   NO

      Creation date . . . : 2017/01/26      Referenced date . . : 2017/01/26
      Expiration date . . : ***None***


Which is close enough to the 4369 cylinder maximum for my purposes!
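
(For reference, that 4,369 ceiling looks like the classic 65,535-track limit for a non-extended-format sequential dataset, expressed in 3390 cylinders: 65,535 tracks / 15 tracks per cylinder = 4,369 cylinders, which is why SPACE=(CYL,4368) is about as big as a single allocation gets.)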

Thanks to EVERYONE who took the time and patience to try and help this extreme "MVS" noob.

I *really* appreciate it!  :)


Now to try and load "IBMUSER.FISH.TESTFILE.#002" with some more data to see how Hercules reacts.  ;-)

Thanks again everyone.

(This is fun!)

--
"Fish" (David B. Trout)
Software Development Laboratories
http://www.softdevlabs.com
mail: [hidden email]




Re: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
On Thu, 26 Jan 2017, at 08:48, ''Fish' (David B. Trout)'
[hidden email] [H390-MVS] wrote:
> IT WORKS!  Completion code 00!

Hooray!


> 2. As suggested by many, I changed the BLKSIZE on my output SYSUT2 DCB to
> 27648 (but left the BLKSIZE on my SYSUT1 input tape DCB set to 30720).

You definitely wouldn't have wanted to change the SYSUT1 DCB parms, as
those have to match what was used when the tape was created, so that the
system reads all of the data from the tape.  BLKSIZE is used by the
access methods to allocate the right amount of space in the I/O buffers.


> Here's what 3.4 "info" now shows:
>
>
>     Data Set Name . . . . : IBMUSER.FISH.TESTFILE.#001
>
>     General Data                           Current Allocation
>      Management class . . : **None**        Allocated cylinders : 4,362

Which is interesting...  one of the stumbling blocks earlier was that you
were convinced that 4350 CYL was more than enough space for your output
file.  And yet, even with a more efficient output blksize, you still
ended up using more cylinders than you then expected.

Do you now understand why this number of cylinders was required to hold
the output data?


--
Jeremy Nicoll - my opinions are my own.

RE: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
Jeremy Nicoll wrote:

[...]
> Which is interesting... one of the stumbling blocks
> earlier was that you were convinced that 4350 CYL was
> more than enough space for your output file. And yet,
> even with a more efficient output blksize, you still
> ended up using more cylinders than you then expected.

Yeah, I was too fast and loose with my numbers.


> Do you now understand why this number of cylinders was
> required to hold the output data?

Yes.

I was also much more careful this time around and this time my numbers were spot on.
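
For anyone curious, the back-of-the-envelope (assuming the usual 3390 figures: 56,664 usable bytes per track, so two blocks per track for any blocksize up to 27,998) goes roughly like this:

     27,648 / 1,024       =  27 records per block
     27 x 2 blocks/track  =  54 records per track
     54 x 15 tracks/cyl   = 810 records per cylinder
     810 x 4,362 cyl used = ~3.53 million records  (~3.6 GB of data)

And if the output were blocked at the tape's 30,720 instead, only ONE block would fit per track (30 records), so the same data would need roughly 54/30 x 4,362 = ~7,850 cylinders.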

My ultimate goal has now been accomplished.  I now know what happens when the maximum shadow file size is reached: Hercules simply floods the console with message HHC00304E:

    HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
    HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
    HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
    [...]


but otherwise does NOT throw any type of I/O error. (which IMHO is just plain WRONG!)

And as a result of that bug, even though all of my data was obviously NOT written to dasd, my job nevertheless eventually ran to successful(!) completion.

I've still got a few more tests to do, but so far this does NOT bode well for Hercules.  :(

--
"Fish" (David B. Trout)
Software Development Laboratories
http://www.softdevlabs.com
mail: [hidden email]





Re: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list


On 1/26/2017 11:55 AM, ''Fish' (David B. Trout)' [hidden email]
[H390-MVS] wrote:

This may need to be moved to the main group (it's no longer an MVS issue
per-se)...

> My ultimate goal has now been accomplished. I now know what happens
> when the maximum shadow file size is reached: Hercules simply floods
> the console with message HHC00304E:
>      HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
>      HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
>      HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
>      [...]
>
>
> but otherwise does NOT throw any type of I/O error. (which IMHO is just plain WRONG!)
Did you try turning CCKD lazy write off?  By default CCKD lazy write is
on, meaning the actual compression and write to the backing file is done
in the background, well after the I/O causing the error has finished.  I
think this is documented as a caveat of CCKD lazy write processing.
> And as a result of that bug, even though all of my data was obviously NOT written to dasd, my job nevertheless eventually ran to successful(!) completion.
I'm afraid you wouldn't be able to read back the written dataset though!
>
> I've still got a few more tests to do, but so far this does NOT bode well for Hercules.  :(
>
The answer would be to implement 64-bit CCKD (that is, CCKD with internal
descriptors using 64-bit offsets).  There are 2 possible approaches:

- The possibility to create "CCKD64" dasds (via dasdinit) and a
possibility to convert 32-bit CCKD to "CCKD64" (I just "coined" CCKD64 -
any other name is fine).  I personally prefer this route.
- On-the-fly conversion (that is what Greg was thinking about - but I now
think it's not a good idea, because it could lead to issues with people
trying to use old versions of Hercules, which obviously wouldn't be able
to use CCKD files that have been converted covertly...  although this
could be averted by using some tricks.  I think Greg had a trick that
wouldn't be an issue for CCKD dasds that are below the 4G limit).

With 64-bit offsets, we should be fine for the next 10-20 years I guess
(that's a file size limit of 16EB).

--Ivan



Re: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
---In [hidden email], <ivan@...> wrote :

>> but otherwise does NOT throw any
>> type of I/O error. (which IMHO is just
>> plain WRONG!)

> Did you try turning CCKD lazy write off ?
> By default CCKD lazy write is
> on - meaning the actual compression
> and write to the backing file is
> done in the background, well after
> the I/O causing the error has
> finished.

Would it be possible to switch off lazy
write automatically when the file size
reaches 3.9 GiB?

BFN. Paul.

Re: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
As a short term measure could Hercules do the equivalent of an sf+ command when the limit is reached? I have run into this using dasdload to build a 3390-27 pack using zlib compression.
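
For reference, the manual equivalent is Hercules' shadow-file console command; the device number below is just the one from the HHC00304E messages earlier in the thread:

     sf+ 0A99

which adds a new shadow file on top of the current one for that device.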

Laddie



Sent from whatever device I am using.


Re: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
 - - - In [hidden email], <laddiehanus@...> wrote:
> As a short term measure could Hercules do the equivalent of an
> sf+ command when the limit is reached? I have run into this using
> dasdload to build a 3390-27 pack using zlib compression.
> Laddie
> Sent from whatever device I am using.
- - - old notes snipped - - -

An sf+ command at 99.999% full sounds good?
Better than relying on lazy write alone, but any time there is any
write issue, perhaps lazy write should be disabled for any additional
writes?
That is to say, use both methods.

Lazy write off may slow processing but gives some data integrity
that wasn't there before.
Late, so not perfect, but a huge improvement.

What are the sf+ issues?

Could the limit of shadow files for that disk
image be exceeded?  What then?

Will the Hercules control file need to specify the
new shadow file or will the same template be used?
I don't use shadow files but assUme that I'm bringing
up a non-issue?

Re: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
- - - In [hidden email], <ivan@...> wrote:
> On 1/26/2017 11:55 AM, ''Fish' (David B. Trout)' david.b.trout@... [H390-MVS] wrote:
> This may need to be moved to the main group (it's no longer an
> MVS issue per-se)...

The thread may be ready to finish?
One post to Hercules-390 to inform of the issue might be nice?
Or at least to the GitHub sites?

>> My ultimate goal has now been accomplished. I now know what happens
>>when the maximum shadow file size is reached: Hercules simply floods
>>the console with message HHC00304E:
>> HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
>> HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
>> HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space error, size exceeds 4096M
>> [...]
>> but otherwise does NOT throw any type of I/O error. (which IMHO is just plain WRONG!)
> Did you try turning CCKD lazy write off ? By default CCKD lazy write is
>on - meaning the actual compression and write to the backing file is
>done in the background, well after the I/O causing the error has
>finished. I think this is documented as a caveat of CCKD lazy write
>processing.

Lazy write should be turned off before the error but that might be
difficult?  Turning it off late, as in on or after the first error, is
better than what we have now.

>> And as a result of that bug, even though all of my data was obviously NOT
>>written to dasd, my job nevertheless eventually ran to successful(!) completion.
>I'm afraid you wouldn't be able to read back the written dataset though !
>> I've still got a few more tests to do, but so far this does NOT bode well for Hercules. :(
> The answer would be to implement 64 bit CCKD (that is CCKD with internal
>descriptors using 64 bit offsets). There are 2 possible approaches :
> - Possibility to create "CCKD64" dasds (via dasdinit) and a possibility
>to convert 32 bit CCKD to "CCKD64" (I just "coined" CCKD64 - any other
>name is fine) - I personally prefer this route.

Sounds reasonable.

> - On the fly conversion (that was Greg was thinking about - but I now
>think it's not a good idea, because it could lead to issues with people
>trying to use old version of hercules - which obviously wouldn't be able
>to use those CCKD file that have been converted covertly.... Although
>this could be averted by using some tricks. (I think Greg had a trick
>that wouldn't be an issue for CCKD dasds that are below the 4G limit).

This would work because there is a byte currently set to zero.
Only a couple or three bits would be needed to indicate a rounding
factor to calculate the location of the track.
0 = track is on a byte boundary.
1 could be on a sixteen byte boundary.
2 could be on a 256 byte boundary.
3 could be on a 4096 byte boundary.
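
In code terms the idea is roughly the following (purely illustrative, not actual Hercules source, and the names are made up):

     #include <stdint.h>

     /* code 0..3 selects the track-image alignment: 1, 16, 256 or 4096 bytes */
     static uint64_t track_file_offset(uint32_t stored_pos, unsigned code)
     {
         return (uint64_t)stored_pos << (4 * code);  /* shift by 0, 4, 8 or 12 bits */
     }

     /* with code 3, a 32-bit stored offset reaches 4 GiB << 12 = 16 TiB */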

Problems:

1. The 4GB limit would only go up to a 16TB limit, not 16 exabytes.

2. Padding track images would waste P.C. disk space.

3. Mixing rounding for different tracks would be strange.

4. There was an old use for the available byte but since
only a couple of bits are needed, conflicts could
be avoided.

> With 64 bit offsets, we should be fine for the next
>10-20 years I guess(that's a file size limit of 16EB).
> --Ivan

What does Moore's law say?
I suspect that it should be good for about 50 years?

RE: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
Ivan Warren wrote:

> This may need to be moved to the main group (it's no longer
> an MVS issue per-se)...

Agreed.  I'll be reporting it once I do a few more tests.


[...]
> >      HHC00304E 0:0A99 CCKD file[1] <shadow-filename>: get space
> >      error, size exceeds 4096M
> >      [...]
> >
> > but otherwise does NOT throw any type of I/O error. (which IMHO
> > is just plain WRONG!)
>
> Did you try turning CCKD lazy write off?

No.  Never heard of that.  Greg's CCKD web page (https://fish-git.github.io/html/cckddasd.html#cckdcommand) makes no mention of the word "lazy".  "Lazy" doesn't appear anywhere in Hercules code either.


[...]
> > And as a result of that bug, even though all of my data was
> > obviously NOT written to dasd, my job nevertheless eventually
> > ran to successful(!) completion.
>
> I'm afraid you wouldn't be able to read back the written dataset
> though!

That's the first test on today's agenda.  :)


> > I've still got a few more tests to do, but so far this does NOT
> > bode well for Hercules.  :(
>
> The answer would be to implement 64 bit CCKD (that is CCKD with
> internal descriptors using 64 bit offsets).

That's one of my long term goals, yes: to enhance Greg's CCKD logic to support 64-bit L1/L2 offsets/pointers, etc.  I'll probably make it an entirely new CCKD format altogether rather than a modification to the existing format.
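
In rough terms (the field layouts here are illustrative only, not the actual cckd.h definitions), the change amounts to widening the on-disk offset fields:

     #include <stdint.h>

     /* today: level-2 entries hold 32-bit file offsets, hence the 4G ceiling */
     struct l2ent32 {
         uint32_t pos;      /* offset of the compressed track image */
         uint16_t len;      /* compressed length                    */
         uint16_t size;     /* allocated size                       */
     };

     /* a "CCKD64" format would simply widen them (and change the header
        magic/version so that older Hercules builds reject the file cleanly) */
     struct l2ent64 {
         uint64_t pos;      /* 64-bit offset: good to 16 EiB         */
         uint32_t len;
         uint32_t size;
     };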


> There are 2 possible approaches :
>
> - Possibility to create "CCKD64" dasds (via dasdinit) and
>   a possibility to convert 32 bit CCKD to "CCKD64" (I just
>   "coined" CCKD64 - any other name is fine) - I personally
>    prefer this route.

As do I.


> - On the fly conversion (that was Greg was thinking about -
>   but I now think it's not a good idea, because it could lead
>   to issues with people trying to use old version of hercules
>   - which obviously wouldn't be able to use those CCKD file
>   that have been converted covertly.... Although this could
>   be averted by using some tricks. (I think Greg had a trick
>   that wouldn't be an issue for CCKD dasds that are below the
>   4G limit).

He said something about a "shift" value that defined the number of bits an offset(?) would be shifted (with the default being zero for the existing format and > 0 for the new larger format).

Or something like that.  I can't remember and I'm too lazy to go back and check.

But like you I believe a brand new non-backward-compatible format is the way to go.

I'm not big on implementing non-backward-compatible solutions but sometimes you have no choice.

--
"Fish" (David B. Trout)
Software Development Laboratories
http://www.softdevlabs.com
mail: [hidden email]





RE: Abend B37-04 -- RESOLVED!

Hercules390 - Mvs mailing list
- - - In [hidden email], <david.b.trout@...> wrote:
- - - beginning snipped - - -
> He said something about a "shift" value that defined the number
>of bits an offset(?) would be shifted (with the default being zero for
>the existing format and > 0 for the new larger format.
> Or something like that. I can't remember and I'm too lazy to go back and check.
- - - ending snipped - - -

The original CCKD code allocated the track images on an eight byte boundary.
That meant that up to 7 bytes could be wasted per track image.
The tracks were still addressed on a byte boundary, so the
limit of 4GB remained.  If the P.C. file system didn't support 4GB, then
even that wasn't available.

Later, CCKD started allocating on a byte boundary but of course still
accepts tracks allocated on an eight byte boundary.

The new plan was to vary the boundary to allow more addressable
disk space.  With a 256-byte boundary instead of a byte boundary, the
4G addressable positions would each point to a 256-byte piece, so a disk
image could go from 4GB to 1TB.

Different shift amounts could be used to minimize wasted bytes at
the end of most tracks but having mixed disk track boundaries on
one disk image doesn't make much sense to me.  It would save
a few bytes but in my opinion, not worth the effort.
Imagine:
All tracks in the disk image below 4GB could be on a byte boundary.
Tracks between location 4GB and x, would be on a different boundary. ( 2 byte? )
Tracks between location x and y would be on a different boundary. ( 4 byte? )
Tracks between location y and z, still another. ( 8 byte? )
I don't believe that the plan was to support that many boundaries.
? Perhaps ? 8 byte ? 256 byte ? 4096 byte ? 65536 byte ? etc. ?

The CCKD64 suggestion sounds simpler and reasonable.
No shift value.
No wasted disk space except for the larger pointer size, which
would only apply to disks created to be able to grow beyond 4GB.