• 10 Posts
  • 307 Comments
Joined 3 years ago
Cake day: July 6th, 2023

  • It’s laughable before you even get to the code. You know, doing “eval bad” when all the build scripts are written in bash 🤣

    There is also no protection for VCS sources (assuming no revision hash is specified) in makepkg (no “locking” with a stored content hash). So, if an AUR package maintainer is malicious, they can push whatever they want from the source side. They can obviously do that in any case, but with VCS sources, they can do it at any moment, transparently. In other words, your primary concern should be knowing the sources come from a trustworthy upstream (and hoping no xz-like fiasco is taking place). Checking that the PKGBUILD/install files are not fishy is the easier part (and should be done by a human). And if you’re using AUR packages to the extent where this is somehow a daunting, time-consuming task, then there is something wrong with you in any case.

    Edit: That is not to say the author of the tool wouldn’t fit right in with the security theater crowd. Hell, some of them have even built whole businesses out of not-too-dissimilar theater components.

    @kadu@scribe.disroot.org


  • So try_from(&**p) is not a code smell/poor form in Rust?

    No. It’s how you (explicitly) go from ref to deref.

    Here:

    • p is &PathBuf
    • *p is PathBuf
    • **p is Path (Deref)
    • And &**p is &Path.

    Since what you started with is a reference to a non-Copy value, you can’t do anything that would use/move *p or **p. Furthermore, Path is an unsized type (just like str and [T]), so you need to reference it (or Box it) in any case.
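    Spelled out on concrete values (the path here is illustrative only), the chain looks like this:

    ```rust
    use std::path::{Path, PathBuf};

    fn main() {
        let buf = PathBuf::from("/tmp/example");
        let p: &PathBuf = &buf;

        // Each step of the chain, made explicit:
        // &PathBuf --(*)--> PathBuf --(* via Deref)--> Path --(&)--> &Path
        let as_path: &Path = &**p;
        assert_eq!(as_path, Path::new("/tmp/example"));

        // Deref coercion usually does this for you when the target type is known:
        let coerced: &Path = p;
        assert_eq!(coerced, as_path);
    }
    ```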

    Another way to do this is:

    let p: &Path = p.as_ref();
    

    Some APIs use AsRef in signatures to allow passing references of different types directly (e.g. File::open()), but that doesn’t apply here.
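    For completeness, a minimal sketch of that AsRef-style signature; name_of is a made-up helper, not from the code under discussion:

    ```rust
    use std::path::{Path, PathBuf};

    // An AsRef<Path> bound lets callers pass &str, String, &Path, &PathBuf,
    // or PathBuf directly, without manual conversions at the call site.
    fn name_of<P: AsRef<Path>>(p: P) -> String {
        p.as_ref()
            .file_name()
            .map(|n| n.to_string_lossy().into_owned())
            .unwrap_or_default()
    }

    fn main() {
        let buf = PathBuf::from("/tmp/example.txt");
        assert_eq!(name_of(&buf), "example.txt");
        assert_eq!(name_of("/tmp/example.txt"), "example.txt");
        assert_eq!(name_of(buf), "example.txt");
    }
    ```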



  • Let’s do this incrementally, shall we?

    First, let’s make get_files_in_dir() idiomatic. We will get back to errors later.

    fn get_files_in_dir(dir: &str) -> Option<Vec<PathBuf>> {
        fs::read_dir(dir)
            .ok()?
            .map(|res| res.map(|e| e.path()))
            .collect::<Result<Vec<_>, _>>()
            .ok()
    }
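    As an aside, the collect-into-Result trick used here works on any iterator of Results; a self-contained illustration on plain data:

    ```rust
    fn main() {
        // All Ok: items are gathered into Ok(Vec<_>).
        let ok: Result<Vec<i32>, String> =
            vec![Ok(1), Ok(2)].into_iter().collect();
        assert_eq!(ok, Ok(vec![1, 2]));

        // Any Err short-circuits the whole collection.
        let bad: Result<Vec<i32>, String> =
            vec![Ok(1), Err("boom".to_string())].into_iter().collect();
        assert_eq!(bad, Err("boom".to_string()));
    }
    ```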
    

    Now, in read_parquet_dir(), if the unwraps stem from confidence that we will never get errors, then we can confidently ignore them (we will get back to the errors later).

    fn read_parquet_dir(entries: &Vec<String>) -> impl Iterator<Item = record::Row> {
        // ignore all errors
        entries.iter()
            .cloned()
            .filter_map(|p| SerializedFileReader::try_from(p).ok())
            .flat_map(|r| r.into_iter())
            .filter_map(|r| r.ok())
    }
    

    Now, let’s go back to get_files_in_dir(), and not ignore errors.

    fn get_files_in_dir(dir: &str) -> Result<Vec<PathBuf>, io::Error> {
        fs::read_dir(dir)?
            .map(|res| res.map(|e| e.path()))
            .collect::<Result<Vec<_>, _>>()
    }
    
     
     fn main() -> Result<(), io::Error> {
         let args = Args::parse();
    -    let entries = match get_files_in_dir(&args.dir)
    -    {
    -        Some(entries) => entries,
    -        None => return Ok(())
    -    };
    -
    +    let entries = get_files_in_dir(&args.dir)?;
     
         let mut wtr = WriterBuilder::new().from_writer(io::stdout());
         for (idx, row) in read_parquet_dir(&entries.iter().map(|p| p.display().to_string()).collect()).enumerate() {
    
    

    Now, SerializedFileReader::try_from() is implemented for &Path, and &PathBuf coerces to &Path (PathBuf derefs to Path). So your dance of converting through display and then to string (which is lossy, btw) is not needed.

    While we’re at it, let’s use a slice instead of &Vec<_> in the signature (clippy would tell you about this if you have it set up with rust-analyzer).

    
    fn read_parquet_dir(entries: &[PathBuf]) ->  impl Iterator<Item = record::Row> {
        // ignore all errors
        entries.iter()
            .filter_map(|p| SerializedFileReader::try_from(&**p).ok())
            .flat_map(|r| r.into_iter())
            .filter_map(|r| r.ok())
    }
    
         let entries = get_files_in_dir(&args.dir)?;
     
         let mut wtr = WriterBuilder::new().from_writer(io::stdout());
    -    for (idx, row) in read_parquet_dir(&entries.iter().map(|p| p.display().to_string()).collect()).enumerate() {
    +    for (idx, row) in read_parquet_dir(&entries).enumerate() {
             let values: Vec<String> = row.get_column_iter().map(|(_column, value)| value.to_string()).collect();
             if idx == 0 {
                 wtr.serialize(row.get_column_iter().map(|(column, _value)| column.to_string()).collect::<Vec<String>>())?;
    
    
    

    Now let’s see what we can do about not ignoring errors in read_parquet_dir().


    Approach 1: Save intermediate reader results

    This consumes all readers before getting further. So, it’s a behavioral change. The signature may also scare some people 😉

    fn read_parquet_dir(entries: &[PathBuf]) -> Result<impl Iterator<Item = Result<record::Row, ParquetError>>, ParquetError> {
        Ok(entries
            .iter()
            .map(|p| SerializedFileReader::try_from(&**p))
            .collect::<Result<Vec<_>, _>>()?
            .into_iter()
            .flat_map(|r| r.into_iter()))
    }
    

    Approach 2: Wrapper iterator type

    How can we combine errors from readers with flat record results?

    This is how.

    enum ErrorOrRows {
        Error(Option<ParquetError>),
        Rows(record::reader::RowIter<'static>)
    }
    
    impl Iterator for ErrorOrRows {
        type Item = Result<record::Row, ParquetError>;
    
        fn next(&mut self) -> Option<Self::Item> {
            match self {
                Self::Error(e_opt) => e_opt.take().map(Err),
                Self::Rows(row_iter) => row_iter.next(),
            }
        }
    }
    
    fn read_parquet_dir(entries: &[PathBuf]) -> impl Iterator<Item = Result<record::Row, ParquetError>> {
        entries
            .iter()
            .flat_map(|p| match SerializedFileReader::try_from(&**p) {
                Err(e) => ErrorOrRows::Error(Some(e)),
                Ok(sr) => ErrorOrRows::Rows(sr.into_iter()),
            })
    }
    
     
         let mut wtr = WriterBuilder::new().from_writer(io::stdout());
         for (idx, row) in read_parquet_dir(&entries).enumerate() {
    +        let row = row?;
             let values: Vec<String> = row.get_column_iter().map(|(_column, value)| value.to_string()).collect();
             if idx == 0 {
                 wtr.serialize(row.get_column_iter().map(|(column, _value)| column.to_string()).collect::<Vec<String>>())?;
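    The same wrapper trick, sketched with plain stand-in types instead of the parquet ones (String standing in for ParquetError, Vec<u32> for row data), so the mechanics are easy to check in isolation:

    ```rust
    // Each "file" either fails to open (yielding one Err item) or yields its rows.
    enum ErrorOrRows {
        Error(Option<String>),         // stand-in for ParquetError
        Rows(std::vec::IntoIter<u32>), // stand-in for RowIter
    }

    impl Iterator for ErrorOrRows {
        type Item = Result<u32, String>;

        fn next(&mut self) -> Option<Self::Item> {
            match self {
                // take() ensures the error is yielded exactly once.
                Self::Error(e) => e.take().map(Err),
                Self::Rows(rows) => rows.next().map(Ok),
            }
        }
    }

    fn main() {
        let sources: Vec<Result<Vec<u32>, String>> = vec![
            Ok(vec![1, 2]),
            Err("bad file".to_string()),
            Ok(vec![3]),
        ];

        // flat_map flattens errors and rows into one stream of Results.
        let items: Vec<Result<u32, String>> = sources
            .into_iter()
            .flat_map(|res| match res {
                Err(e) => ErrorOrRows::Error(Some(e)),
                Ok(v) => ErrorOrRows::Rows(v.into_iter()),
            })
            .collect();

        assert_eq!(
            items,
            vec![Ok(1), Ok(2), Err("bad file".to_string()), Ok(3)]
        );
    }
    ```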
    

    Approach 3 (bonus): Using unstable #![feature(gen_blocks)]

    fn read_parquet_dir(entries: &[PathBuf]) -> impl Iterator<Item = Result<record::Row, ParquetError>> {
        gen move {
            for p in entries {
                match SerializedFileReader::try_from(&**p) {
                    Err(e) => yield Err(e),
                    Ok(sr) => for row_res in sr { yield row_res; }
                }
            }
        }
    }
    


  • As with all ads, especially M$ ones…
    No Code, Don’t Care

    At least if the code were available, I could find out what they mean by “spoofed MIME” and how that attack vector works (is the actual file “magic” header spoofed, but the file still manages to get parsed as its real, non-spoofed format nonetheless? How?).

    Also, I would have found out whether this is a new use of “at scale” applied to purely client code, or whether a service is actually involved.



  • If I understand what you’re asking…

    This leaves some details/specifics out to simplify. But basically:

    async fn foo() {}
    
    // ^ this roughly desugars to
    
    fn foo() -> impl Future<Output = ()> {}
    

    This meant that you couldn’t just have (stable) async methods in traits, not because of async itself, but because you couldn’t use impl Trait in return positions in trait methods, in general.

    Box<dyn Future> was a less-than-ideal workaround (not zero-cost, plus other dyn drawbacks). async_trait was a proc-macro solution that generated code with that workaround. So Box<dyn Future> was never a desugaring done by the language/compiler.

    Now that we have (stable) impl Trait in return position in trait methods, all this dance is no longer strictly needed, and hasn’t been for a while.
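    To make the desugaring concrete, here’s a sketch with a tiny hand-rolled block_on (no executor crate assumed; only suitable for futures that never actually suspend, like the ones below). The trait at the end shows the now-stable interop: an async fn in an impl satisfying an impl Future method declared in the trait. Requires Rust 1.75+:

    ```rust
    use std::future::Future;
    use std::pin::pin;
    use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

    // Minimal busy-poll executor with a no-op waker.
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| noop_raw(), |_| {}, |_| {}, |_| {});

    fn noop_raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }

    fn block_on<F: Future>(fut: F) -> F::Output {
        let waker = unsafe { Waker::from_raw(noop_raw()) };
        let mut cx = Context::from_waker(&waker);
        let mut fut = pin!(fut);
        loop {
            if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
                return v;
            }
        }
    }

    // The sugared form...
    async fn foo() -> u32 {
        7
    }

    // ...and (roughly) what it desugars to.
    fn foo_desugared() -> impl Future<Output = u32> {
        async { 7 }
    }

    // With return-position impl Trait in traits stable, a trait can declare
    // this directly, and an `async fn` in the impl satisfies it.
    trait Fetch {
        fn fetch(&self) -> impl Future<Output = u32>;
    }

    struct Fixed;

    impl Fetch for Fixed {
        async fn fetch(&self) -> u32 {
            9
        }
    }

    fn main() {
        assert_eq!(block_on(foo()), 7);
        assert_eq!(block_on(foo_desugared()), 7);
        assert_eq!(block_on(Fixed.fetch()), 9);
    }
    ```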



  • printf uses macros in its implementation.

    int
    __printf (const char *format, ...)
    {
      va_list arg;
      int done;
    
      va_start (arg, format);
      done = __vfprintf_internal (stdout, format, arg, 0);
      va_end (arg);
    
      return done;
    }
    

    ^ This is from glibc. Do you know what va_start and va_end are?

    to get features that I normally achieve through regular code in other languages.

    Derives expand to “regular code”. You can run cargo expand to see it. And I’m not sure how that’s an indication of “bare bone”-ness in any case.

    Such derives actually use a cool trick: proc macros and traits have separate namespaces. So #[derive(Debug)] invokes the proc macro named Debug, which happens to generate “regular code” that implements the Debug trait. The proc macro named Debug and the trait named Debug don’t point to the same thing, and don’t have to match name-wise.
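    For illustration, here’s roughly the kind of “regular code” #[derive(Debug)] would generate for a small struct (the real expansion differs in details, but it’s an ordinary trait impl like this):

    ```rust
    use std::fmt;

    struct Point {
        x: i32,
        y: i32,
    }

    // Hand-written impl approximating the derive's output.
    impl fmt::Debug for Point {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            f.debug_struct("Point")
                .field("x", &self.x)
                .field("y", &self.y)
                .finish()
        }
    }

    fn main() {
        let p = Point { x: 1, y: 2 };
        assert_eq!(format!("{:?}", p), "Point { x: 1, y: 2 }");
    }
    ```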




  • (didn’t read OP, didn’t keep up with chimera recently)

    Off the top of my head:
    The init system. Usable FreeBSD utils instead of busybox overridden by GNU utils (which you will have to do in Alpine, because busybox is bare-bones). Everything is built with LLVM (not GCC). Extra hardening (utilizing LLVM). And it doesn’t perform like shit in some multi-threaded, allocator-heavy loads, because they patch musl directly with mimalloc. It also doesn’t pretend to have a stable/release channel (only rolling).

    So, the use of apk is not that relevant. “no GNU” is not really the case with Alpine. They do indeed have “musl” in common, but Chimera “fixes” one of the most relevant practical shortcomings of using it. And finally, I don’t think Chimera really targets fake “lightweight”-ness just for the sake of it.







    The whole premise is wrong, since it’s based on the presumption that C++ and Rust are effectively generational siblings, with the C++ “designers” (charitable) having had the option to take the Rust route (in the superficial, narrow aspects covered) but choosing not to. In reality, C++ was the intellectual-pollution product of “next C” and the OOP overhype of that era (late 80s/early 90s), resulting in the “C with classes” moniker.

    The lack of both history (and/or evolution) and paradigm talk is telling.



  • /me putting my Rust (post-v1.0 era) historian hat on.

    The list of (language-level) reasons why people liked Rust was already largely covered by the bullet points on the real original Rust website homepage, before some “community” people decided to nuke that website because they didn’t like the person who wrote those points (or rather, what that person was “becoming”). They tasked some faultless volunteers who didn’t even know much Rust with developing a new website, and then rushed it out. It was ugly. It lacked supposedly important components like internationalization, which the original site had. But what mattered to those “community” people (not to be confused with the larger body of people who develop Rust and/or with Rust) was that the very much technically relevant bullet points were gone. And it was then, and only then, that useless, meaningless “empowerment” speak came into the picture.