Topic
  • 4 replies
  • Latest Post - 2018-10-11T15:17:59Z by VladimirS_t
VladimirS_t
4 Posts

Pinned topic ftruncate64 incorrect file size on gpfs with SUCCESS

2018-10-03T19:33:53Z

I could not reproduce this problem with just the 4 trimmed calls below, but it happens consistently in my single-threaded application on a GPFS (4.2.x) filesystem.
The important sequence of calls is:

const int64_t capacity = 8458240;    
ftruncate64(fd, current_size + capacity);
fstat(fd, &buf);
munmap(base, capacity);
base = mmap64(nullptr, capacity, PROT_READ|PROT_WRITE, MAP_SHARED, fd, current_size);

It works on all file systems available here except GPFS. Note: current_size is not rounded to capacity, but it is monotonically increasing. The program and the above calls run successfully for a while, until a truncation to 5724962816 bytes. At that point the file on GPFS is adjusted to only 5716836352 bytes (exactly 5452 MiB). Smaller sizes work well. It is visible in the strace output:

ftruncate(6, 5724962816)                = 0
fstat(6, {st_mode=S_IFREG|0640, st_size=5716836352, ...}) = 0

Any suggestions on where to look further?

Thank you in advance.

 

Updated on 2018-10-03T19:42:54Z by VladimirS_t
  • VladimirS_t
    4 Posts

    Re: ftruncate64 incorrect file size on gpfs with SUCCESS

    2018-10-03T20:24:37Z

    minimal example:

     

    #include <cerrno>
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char* argv[])
    {
      int m_fd = ::open(argv[1], O_RDWR | O_CREAT | O_TRUNC,
                        S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
      const long m_capacity = 8458240L;
      const long size      = 60033856276L / 8458240L;
      const long increment = 8417280L;
      for (long i = 1; i < size; i++)
      {
        if (::ftruncate64(m_fd, m_capacity + increment * i) < 0)
        {
          std::cerr << "ftruncate failed " << errno << " " << ::strerror(errno) << std::endl;
          return 1;
        }

        int8_t* base = (int8_t*)::mmap(nullptr, m_capacity, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, m_fd, increment * i);
        if (base == MAP_FAILED)
        {
          std::cerr << "mmap failed " << errno << " " << ::strerror(errno) << std::endl;
          return 1;
        }

        struct stat buf;
        ::fstat(m_fd, &buf);
        if (buf.st_size != m_capacity + increment * i)
        {
          std::cerr << "size failed " << buf.st_size << " != " << (m_capacity + increment * i)
                    << ", i " << i << std::endl;
          return 1;
        }

        // Touch every byte of the new mapping (read-modify-write).
        for (long j = 0; j < m_capacity; ++j)
        {
          volatile int8_t* addr = base + j;
          int8_t x = *addr;
          *addr = x;
        }

        ::munmap(base, m_capacity);
      }
      return 0;
    }

     

  • NateFalk
    1 Post

    Re: ftruncate64 incorrect file size on gpfs with SUCCESS

    2018-10-04T15:11:50Z

    Hello,

    This looks similar to a known problem that was fixed in 4.2.3.7: APAR IJ03091

    http://www-01.ibm.com/support/docview.wss?uid=isg1IJ03091

     

    Error description

    • If a truncate() call is used to set the size of a file to
      something larger than the size the last write left it at,
      the filesystem may not correctly update the size of the
      file.
      
      Reported in:
      Spectrum Scale 4.2.3.2 on RHEL 7.3 x86_64
      
      Known Impact:
      If an application is expecting a file to be of a certain
      size, and a later stat() call shows the size of that file
      to be something different, that may cause the application
      to believe that the file is corrupt.
      
      Verification steps:
      One example to recreate this issue is to use a 4 MiB
      block
      size for the data pool in a filesystem, and use the
      following test program to demonstrate the problem:
      
      main()
      {
         system("dd if=/dev/zero bs=1 seek=22864437303 count=1 of=/gpfs2/test");
         truncate("/gpfs2/test", 22868010952);
      }
      
      # ls -l /gpfs2/test
      -rw-r--r-- 1 root root 22867345408 Nov  9 12:00 /gpfs2/test
      
      22864437303 bytes are written to the file, the size of
      the file is set to 22868010952 by truncate(), but stat()
      (ls) shows the file size as 22867345408.
      

    You could try upgrading to the latest Spectrum Scale 4.2.3 PTF and see if that fixes it for you.

     

    Thanks,

     

    Nate

     

  • VladimirS_t
    4 Posts

    Re: ftruncate64 incorrect file size on gpfs with SUCCESS

    2018-10-04T15:42:19Z
    In reply to NateFalk (2018-10-04T15:11:50Z):


    Will try, I'll let you know the results.

    Note it will take a while to upgrade.

    Updated on 2018-10-05T15:01:33Z by VladimirS_t
  • VladimirS_t
    4 Posts

    Re: ftruncate64 incorrect file size on gpfs with SUCCESS

    2018-10-11T15:17:59Z
    In reply to NateFalk (2018-10-04T15:11:50Z):


    We could not upgrade to 4.2.3.7.

    However, upgrading to 5.0.1 resolves the issue.

     

    Thank you.