Convert little-endian to big-endian and 32-bit to 64-bit


Dhvevab26

Computer Science

Description

I have this question as homework and I can't seem to figure it out. Hope someone can help me out.

The following 20 bytes represent the contents of a binary file written by a 32-bit big-endian computer (the left column gives the byte addresses, the other columns give the contents of the bytes; letters are ASCII characters, numbers are numerical byte values). The bytes at addresses 16-19 should be interpreted as one 32-bit value.

0000     h e l l o 32 w o

0008     r l d 0 0 0 0 0

0016     0 0 4 3


(The 32 at address 5 is a single numerical byte value: the ASCII code for a space.)

Someone writes a program that reads the text stored in the first 16 bytes and the binary value stored in the next 4 bytes.

Questions:
-What's the numerical value (decimal or hexadecimal is fine) stored in the bytes at addresses 16-19? (So one numerical value, not four separate values.)

-Edit the file so that a 64-bit little-endian computer will read the same text and the same value. Because the little-endian computer has a 64-bit architecture, the value must not only be converted to little-endian byte order but also widened to a 64-bit representation.

Hope someone can help me out! Thanks very much in advance!


Explanation & Answer

OK. I'm sure that you do. Great. Etc.

Now, unless your bridge chips are transferring ONLY data of a single uniform type, they are also useless.

Just HOW does your bridge chip KNOW which fields in the data stream are
16-bit little-endian and should NOT be swapped, which are 16-bit big-endian
and DO need to be swapped, and which are 8-bit ASCII text that should not be
touched at all?

Oh, and how about 32-bit big-endian values? Converting those is NOT just a
case of doing adjacent 16-bit swaps. And what about a 64-bit big-endian integer?
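
To make that concrete, here is a small C sketch (the value 0x11223344 is just an example) showing that a pair of adjacent 16-bit swaps does not produce the full byte reversal that a 32-bit big/little-endian conversion needs:

    #include <stdint.h>
    #include <stdio.h>

    /* Full 32-bit byte reversal: what converting a 32-bit value between
       big-endian and little-endian actually requires. */
    static uint32_t swap32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
               ((x << 8) & 0x00FF0000u) | (x << 24);
    }

    /* Swapping the bytes within each adjacent 16-bit half only. */
    static uint32_t swap16_pairs(uint32_t x)
    {
        return ((x >> 8) & 0x00FF00FFu) | ((x << 8) & 0xFF00FF00u);
    }

    int main(void)
    {
        uint32_t v = 0x11223344u;                          /* example value */
        printf("full 32-bit swap:      0x%08X\n", swap32(v));       /* 0x44332211 */
        printf("adjacent 16-bit swaps: 0x%08X\n", swap16_pairs(v)); /* 0x22114433 */
        return 0;
    }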

Does your bridge handle these cases? I doubt it. If it does, PLEASE let
me know -- it would be a MARVEL!

No hardware can ever handle endian issues, UNLESS IT KNOWS the context of
the data -- what is big-endian, what is little endian, what is text data,
what is 16/32/64 bits, and so on.

Too often I've seen a hardware "designer" mumble something like "oh,
PCI is little endian, this is a big-endian processor, so I'll swap
those bytes".

Wrong.

There are some very nice chips out there that can be programmed to both
accept big/little endian addresses AND big/little descriptors in memory.

They work really nicely.

Except when a hardware guy throws in a gratuitous swap.

> More importantly, we have many customers that are happy with the endian-
> conversion features of our bridges. They would surely disagree with your
> advice.

OK. Good for you. Since I'm not buying your product, you should ignore
me. Right?

However, did you talk to the hardware guys who selected the chip,
or to the SOFTWARE guys programming it?


> > Byte swapping is inherently contextual. No hardware mechanism can ever
> > know if those "next 4 bytes" are a text string and should NOT be swapped,
> > or if those 4 bytes are "wrong endian" and should be swapped.
> >
> > The net effect (of hardware-provided swapping) is sometimes horrible.
>
> I agree, endian conversion is inherently contextual.
>
> However, in many applications, it is usually fairly simple for software to arrange
> to transfer blocks of similar-kind data (bytes, words, etc), and then engage
> our DMA engines (with endian-conversion hardware) to deliver it without any
> loss in performance. Where it is not possible (or not practical) to ensure that a
> significant block has a uniform data type, software may be required to massage
> a *portion* of the data, and then let the bridge's swapping hardware do the rest.

Yes, although "simple" is misleading; "time consuming" is more appropriate. Since
EITHER WAY software is going to have to swap something, somewhere, why
not just drop the hardware swap? It just adds to the confusion.

> For example, when a device chooses to DMA a block of string data, the
> device can be programmed to perform swapping on a byte boundary. When
> transferring a block of 16-bit shorts, the device can be programmed to swap
> on a 16-bit boundary, etc. Furthermore, by having endian-conversion for
> DMA data programmable through the DMA descriptor, the bridge does not
> have to be re-programmed before transferring each block of data. This, coupled
> with DMA chaining, permits several blocks of data, each uniform in type within
> itself but differing in type from block to block, to be converted and
> delivered efficiently.
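
As a rough sketch of the kind of descriptor being described (the field names and layout here are invented purely for illustration, not taken from any particular vendor's part):

    #include <stdint.h>

    /* Hypothetical chained DMA descriptor with a per-descriptor
       endian-conversion setting; illustrative only. */
    enum swap_mode {
        SWAP_NONE = 0,   /* byte data such as strings: leave untouched */
        SWAP_16   = 1,   /* swap bytes within each 16-bit halfword     */
        SWAP_32   = 2,   /* swap bytes within each 32-bit word         */
        SWAP_64   = 3    /* swap bytes within each 64-bit doubleword   */
    };

    struct dma_desc {
        uint64_t src_addr;        /* source buffer address                   */
        uint64_t dst_addr;        /* destination buffer address              */
        uint32_t length;          /* transfer length in bytes                */
        uint32_t swap;            /* one of enum swap_mode, per descriptor   */
        struct dma_desc *next;    /* next descriptor in the chain (a real
                                     device would use a bus address); NULL
                                     terminates the chain                    */
    };

Whether such a field helps at all depends on who fills it in, which is exactly the point the reply below makes.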

Well, uniform ("monotonic") data, like a buffer of nothing but 32-bit values of
one endianness, is rare. But just how would a low-level device driver KNOW what
data is what?

A disk driver operates at the level of "read or write block N. Here's
the buffer address".

This *IS* the whole point. I perhaps should have expanded on it first.

I'm a software guy. HARDWARE PEOPLE, MOST OF THE TIME, DO NOT UNDERSTAND
WHAT IS GOOD FOR PROGRAMMING (and vice versa, I suppose). HARDWARE BYTE
SWAPPING DOES MORE HARM THAN GOOD.

I personally think that homicide should be legal ( :-) ) if one more
hardware guy tells a software guy to "make it correct in software!"

Kaboom - like, just what is DMA for in the first place?

Sure, a program CAN do all that extra data-massaging stuff, and burn lots
of CPU cycles and delay things, but I rather think hardware should do
it right.

(As an allied example, consider DMA hardware that requires I/O to start on
a power-of-two boundary and to have a power-of-two length; IDE controllers,
for example. Do you know how LONG it takes to align a buffer when a 120K-byte
IDE drive read starts on an odd byte address? A lot!)
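
As a sketch of what that alignment work looks like in software (the 512-byte alignment and the dma_read callback are just placeholders for whatever the controller actually requires):

    #include <stdlib.h>
    #include <string.h>

    /* When the controller insists on aligned, power-of-two-sized transfers,
       an arbitrarily aligned caller buffer forces a bounce buffer plus a
       full extra copy of the payload, which is the cost complained about
       above. */
    int read_unaligned(void *caller_buf, size_t len,
                       int (*dma_read)(void *aligned_buf, size_t aligned_len))
    {
        size_t aligned_len = (len + 511) & ~(size_t)511;   /* round up to 512 */
        void  *bounce      = aligned_alloc(512, aligned_len);
        if (bounce == NULL)
            return -1;

        int rc = dma_read(bounce, aligned_len);  /* device fills aligned buffer */
        if (rc == 0)
            memcpy(caller_buf, bounce, len);     /* the extra copy, every time  */

        free(bounce);
        return rc;
    }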

>
> > In a system that swaps to/from PCI, but not to/from memory, I found
> > that bi-endian devices (i.e., the DEC2114x Ethernet chips) were made
> > "slow". The DEC2114x can specify either endian for both data and
> > descriptor lists, but since the hardware didn't consistently swap,
> > either the data or the descriptors always had to be swapped in
> > software.
> >
> > Ugh.
> >
> > Thus, I assert that hardware swapping is useless.
>
> I have to disagree with Phillip's assertion. It is unfortunate that he has had
> such bad experiences with devices that perform hardware endian conversion.
> I would agree that, for the contextual reasons mentioned above, endian
> conversion hardware is not the holy-grail for endian conversion -- it is simply
> an accelerator to perform simple endian conversion cases without losing any
> performance to those cases.

It's not. If you're JUST transferring uniform data over your chips, like
a series of 32-bit big-endian integers, then you're fine. But from your
description, you'd be seeing all kinds of data: data in all kinds of
formats, some that can be swapped and some that can't. So why not leave
it alone?

It just makes it confusing.

> Certainly, not every application will be able to get away from doing some endian
> conversion in software. Nevertheless, even if you could only use the endian-
> conversion hardware on 50% of the data being moved across the bridge (you
> should be able to do much better), then you only take the hit in performance
> on the unaccelerated 50% (or less) of the data. Please don't discount the
> performance gains that are to be had simply because the hardware cannot
> handle every possible case.
>
> Michael Tresidder
> V3 Semiconductor
> tresidd@vcubed.com

Just HOW is this "50%" to be determined?

The low-level driver has NO knowledge of the data structure, so it can't
do it.

The application can't always do it, and it shouldn't have to (for example,
suppose you embed an Adobe PostScript engine, which comes in object-only
form), because the source code should be the same for a big-endian and a
little-endian machine. So who would do it?



Also, if I read your post correctly, you seem to imply that your bridge
chip does only (??) 16-bit endian swapping. Is that right? Because that
would screw up 32-bit big-endian numbers, right?


SUMMARY
-------
Hardware, such as the bridge chips discussed above, should NOT attempt to be
smart and do endian swaps. It only scrambles the data more.

Hardware is better when I/O can be done by descriptor chains.

Those descriptors are best when the byte alignment and the length can
be any value. Please end the alignment and modulo-length restrictions.



REFOCUS
-------
Of course, perhaps I'm all wrong. So tell me: if one is reading an IDE disk
drive full of little-endian data, with a DOS/Win95 FAT-16 partition, on a
big-endian MIPS system, just HOW would a bridge chip with endian swapping be useful?

1 - 16-bit only swapping will screw up 32-bit values.
2 - 32-bit only swapping will screw up 16-bit values.
3 - Swapping at all screws up byte fields (strings etc.).

You haven't lived until you've programmed code on a big-endian machine
to "reassemble" byte strings that were 32-bit endian-swapped BUT
read/written as 16-bit numbers. That is confusion and complexity that would
have been avoided by NOT trying to be helpful.
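
For instance, here is a tiny C sketch of item 3 above: a gratuitous 32-bit swap applied to the text from the question, and the extra "reassembly" pass that software is then stuck with (all names are illustrative):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char text[12] = "hello world";   /* 11 characters plus the NUL byte */
        char wire[12], fixed[12];

        /* What a gratuitous 32-bit swap does to byte data: each 4-byte
           group is reversed, scrambling the characters. */
        for (size_t i = 0; i < sizeof text; i += 4)
            for (size_t j = 0; j < 4; j++)
                wire[i + j] = text[i + 3 - j];
        printf("%s\n", wire);   /* prints "llehow o" (stops at the moved NUL) */

        /* The "reassembly": undo the word swap before the bytes make
           sense as characters again. */
        for (size_t i = 0; i < sizeof wire; i += 4)
            for (size_t j = 0; j < 4; j++)
                fixed[i + j] = wire[i + 3 - j];
        printf("%s\n", fixed);  /* "hello world" again */

        return 0;
    }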

Cited Source: http://www.pcisig.com/reflector/msg01714.html

