Scala Slick methods I can't understand so far

I am trying to understand how Slick works and what it requires of my code.

For example:

package models

case class Bar(id: Option[Int] = None, name: String)

object Bars extends Table[Bar]("bar") {
  // This is the primary key column
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)

  def name = column[String]("name")

  // Every table needs a * projection with the same type as the table's type parameter
  def * = id.? ~ name <> (Bar, Bar.unapply _)
}

Can someone explain what the * method is for? What is <>, and why unapply? And what is the projection method ~ that returns an instance of Projection2?


Since no one else has answered, this might help to get you started. I don't know Slick very well.

From the Slick documentation:

Lifted Embedding:

Every table requires a * method containing a default projection. This describes what you get back when you return rows (in the form of a table object) from a query. Slick’s * projection does not have to match the one in the database. You can add new columns (e.g. with computed values) or omit some columns as you like. The non-lifted type corresponding to the * projection is given as a type parameter to Table. For simple, non-mapped tables, this will be a single column type or a tuple of column types.

In other words, Slick needs to know how to deal with a row returned from the database. The * method you defined uses Slick's projection combinators to combine your column definitions into something that can be applied to a result row.
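
As a side note, the table definition in the question only compiles with Slick's lifted embedding in scope. A minimal sketch of the assumed setup (the H2 driver is just an assumption; use the import matching your database):

    // Sketch of the import assumed by the question's snippet (Slick 1.x).
    // H2Driver is an assumption; pick the driver for your actual database.
    import scala.slick.driver.H2Driver.simple._ // the lifted embedding: Table, Query, column types, ...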

[UPDATE] - added (yet another) explanation of for comprehensions

  1. The * method:

    This returns the default projection - which is how you describe
    'all the columns (or computed values) I am usually interested in'.

    Your table could have several fields; you only need a subset for your default projection. The default projection must match the type parameters of the table.

    Let's take it one at a time. Without the <> stuff, just the *:

    // First take: only the table definition, no case class:

    object Bars extends Table[(Int, String)]("bar") {
      def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
      def name = column[String]("name")

      def * = id ~ name // Note: just a simple projection, not using .? etc.
    }

    // Note that the case class 'Bar' is nowhere to be found. This is
    // an example without it (with only the table definition).
    

    Just a table definition like that will let you make queries like:

    implicit val session: Session = // ... a db session obtained from somewhere
    
    
    // A simple select-all:
    val result = Query(Bars).list   // result is a List[(Int, String)]
    

    The default projection of (Int, String) leads to a List[(Int, String)] for simple queries such as these.
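
    (As an aside: list is not the only way to run a query. Assuming Slick 1.x's invoker API, something like firstOption should also be available; treat the exact method name as an assumption.)

    // Sketch, assuming Slick 1.x invokers: the default projection's type flows through.
    val maybeRow: Option[(Int, String)] = Query(Bars).firstOption // None if the table is empty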

    // SELECT b.name, 1 FROM bar b WHERE b.id = 42;
    val q =
      for (b <- Bars if b.id === 42)
      yield (b.name ~ 1)
    // yield (b.name, 1) // this is also allowed:
    // tuples are lifted to the equivalent projection.
    

    What's the type of q? It is a Query with the projection (String, Int). When invoked, it returns a List of (String, Int) tuples as per the projection.

     val result: List[(String, Int)] = q.list
    

    In this case, you have defined the projection you want in the yield clause of the for comprehension.
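
    As for where the implicit session used above comes from: a sketch, assuming Slick 1.x's Database API and an in-memory H2 database (the URL and JDBC driver class are placeholders):

    // Sketch of obtaining a session, assuming Slick 1.x's Database API.
    // The URL and driver class are placeholders for your own database.
    import scala.slick.session.Database

    Database.forURL("jdbc:h2:mem:test", driver = "org.h2.Driver") withSession { implicit session =>
      val rows: List[(String, Int)] = q.list // run the query built above
    }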

  2. Now about <> and Bar.unapply.

    This provides what are called Mapped Projections.

    So far we've seen how Slick allows you to express queries in Scala that return a projection of columns (or computed values); so when executing these queries you have to think of the result row of a query as a Scala tuple. The type of the tuple will match the projection that is defined (by your for comprehension as in the previous example, or by the default * projection). This is why field1 ~ field2 returns a projection of type Projection2[A, B], where A is the type of field1 and B is the type of field2.

    q.list.map {
      case (name, n) => // do something with name: String and n: Int
    }

    Query(Bars).list.map {
      case (id, name) => // do something with id: Int and name: String
    }
    

    We're dealing with tuples, which may be cumbersome if we have too many columns. We'd like to think of results not as a TupleN but rather as some object with named fields.

    (id ~ name)  // A projection

    // Assuming you have a Bar case class:
    case class Bar(id: Int, name: String) // For now, using a plain Int instead
                                          // of Option[Int] - for simplicity

    (id ~ name <> (Bar, Bar.unapply _)) // A MAPPED projection

    // Which lets you do:
    Query(Bars).list.map(b => b.name)
    // instead of
    // Query(Bars).list.map { case (_, name) => name }

    // Note that I use list.map instead of mapResult just for explanation's sake.
    

    How does this work? <> takes a projection Projection2[Int, String] and returns a mapped projection on the type Bar. The two arguments Bar, Bar.unapply _ tell Slick how this (Int, String) projection must be mapped to a case class.

    This is a two-way mapping; Bar is the case class constructor, so that's the information needed to go from (id: Int, name: String) to a Bar. And unapply, as you may have guessed, is for the reverse.

    Where does unapply come from? This is a standard Scala method available for any ordinary case class - just defining Bar gives you a Bar.unapply which is an extractor that can be used to get back the id and name that the Bar was built with:

    val bar1 = Bar(1, "one")
    // later
    val Bar(id, name) = bar1  // id will be an Int bound to 1,
                              // name a String bound to "one"
    // Or in pattern matching
    val bars: List[Bar] = // gotten from somewhere
    val barNames = bars.map {
      case Bar(_, name) => name
    }

    val x = Bar.unapply(bar1)  // x is an Option[(Int, String)]
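
    To see the two directions <> needs, here is a plain-Scala sketch (no Slick involved) of the pair (Bar, Bar.unapply _); build and unbuild are just names made up for the illustration:

    // Plain Scala, no Slick: the two functions that (Bar, Bar.unapply _) supplies.
    case class Bar(id: Int, name: String)     // the simplified version used above

    val build: (Int, String) => Bar           = Bar            // the companion object is an (Int, String) => Bar
    val unbuild: Bar => Option[(Int, String)] = Bar.unapply _  // Bar -> Some((id, name))

    build(1, "one")          // Bar(1, "one")
    unbuild(Bar(1, "one"))   // Some((1, "one"))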
    

    So your default projection can be mapped to the case class you most expect to use:

    object Bars extends Table[Bar]("bar") {
      def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
      def name = column[String]("name")
      def * = id ~ name <> (Bar, Bar.unapply _)
    }
    

    Or you can even have it per-query:

    case class Baz(name: String, num: Int)

    // SELECT b.name, 1 FROM bar b WHERE b.id = 42;
    val q1 =
      for (b <- Bars if b.id === 42)
      yield (b.name ~ 1 <> (Baz, Baz.unapply _))
    

    Here the type of q1 is a Query with a mapped projection to Baz. When invoked, it returns a List of Baz objects:

     val result: List[Baz] = q1.list
    
  3. Finally, as an aside, the .? offers Option Lifting - the Scala way of dealing with values that may not be present.

     (id ~ name)   // Projection2[Int, String] // this is just for illustration
    (id.? ~ name) // Projection2[Option[Int], String]
    

    Which, wrapping up, will work nicely with your original definition of Bar:

    case class Bar(id: Option[Int] = None, name: String)

    // SELECT b.id, b.name FROM bar b WHERE b.id = 42;
    val q0 =
      for (b <- Bars if b.id === 42)
      yield (b.id.? ~ b.name <> (Bar, Bar.unapply _))

    q0.list // returns a List[Bar]
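
    To see why the id is an Option at all: until a row is inserted, there is no database-generated id to put in it. A plain-Scala illustration (no Slick involved), using the Bar(id: Option[Int] = None, name: String) defined just above:

    // Plain Scala: why Bar.id is an Option[Int].
    val notYetSaved = Bar(name = "fresh")     // id is None until the database assigns one
    val loaded      = Bar(Some(42), "loaded") // id as read back from the database

    val label = loaded.id match {
      case Some(i) => s"bar #$i"
      case None    => "unsaved bar"
    }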
    
  4. In response to the comment on how Slick uses for comprehensions:

    Somehow, monads always manage to show up and demand to be part of the explanation...

    For comprehensions are not specific to collections. They may be used with any kind of monad, and collections are just one of the many kinds of monad available in Scala.

    But as collections are familiar, they make a good starting point for an explanation:

    val ns = (1 to 100).toList // Lists for familiarity
    val result =
      for { i <- ns if i*i % 2 == 0 }
      yield i*i
    // result is a List[Int]: List(4, 16, 36, ...)
    

    In Scala, a for comprehension is syntactic sugar for (possibly nested) method calls: the above code is (more or less) equivalent to:

    ns.filter(i => i*i % 2 == 0).map(i => i*i)
    

    Basically, anything with filter, map and flatMap methods (in other words, a Monad) can be used in a for comprehension in place of ns. A good example is the Option monad. Here's the previous example, where the same for statement works on both the List and Option monads:

    // (1)
    val result =
      for {
        i  <- ns         // ns is a List monad
        i2 <- Some(i*i)  // Some(i*i) is an Option
        if i2 % 2 == 0   // filter
      } yield i2
    
    
    // Slightly more contrived example:
    def evenSqr(n: Int) = { // return the square of a number
      val sqr = n*n         // only when the square is even
      if (sqr % 2 == 0) Some(sqr)
      else None
    }
    
    
    // (2)
    val result2 =
      for {
        i  <- ns
        i2 <- evenSqr(i) // i2 may or may not be defined for i!
      } yield i2
    

    For these last two examples, the transformation would look roughly like this:

    // 1st example
    val result =
      ns.flatMap(i => Some(i*i)).filter(i2 => i2 % 2 == 0)

    // Or for the 2nd example
    val result2 =
      ns.flatMap(i => evenSqr(i))
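
    To stress that nothing here is List- or Option-specific, here is a tiny hypothetical Box type (not from Slick or the standard library) that also works in a for comprehension, simply because it defines map, flatMap and withFilter:

    // A made-up Box type: for comprehensions only need these three methods.
    case class Box[A](value: A) {
      def map[B](f: A => B): Box[B]           = Box(f(value))
      def flatMap[B](f: A => Box[B]): Box[B]  = f(value)
      def withFilter(p: A => Boolean): Box[A] = this // simplified: ignores the predicate
    }

    val boxed =
      for {
        a <- Box(20)
        b <- Box(22)
      } yield a + b // Box(42)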
    

    In Slick, queries are monadic - they are just objects with the map, flatMap and filter methods. So the for comprehension (shown in the explanation of the * method) just translates to:

    val q =
      Query(Bars).filter(b => b.id === 42).map(b => b.name ~ 1)
    // Type of q is Query[(String, Int)]

    val r: List[(String, Int)] = q.list // Actually run the query
    

    As you can see, flatMap, map and filter are used to generate a Query by repeatedly transforming Query(Bars) with each invocation of filter and map. In the case of collections these methods actually iterate over and filter the collection, but in Slick they are used to generate SQL. More details here: How does Scala Slick translate Scala code into JDBC?
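
    If it helps to picture what "used to generate SQL" means, here is a greatly simplified, hypothetical sketch (not Slick code, and not how Slick is actually implemented): a query object whose filter and map only record what was asked for, and only the final step renders SQL-like text:

    // Hypothetical sketch: filter/map build a description; nothing touches data.
    case class FakeQuery(table: String, where: List[String] = Nil, select: String = "*") {
      def filter(cond: String): FakeQuery    = copy(where = where :+ cond)
      def map(projection: String): FakeQuery = copy(select = projection)
      def sql: String =
        s"SELECT $select FROM $table" +
          (if (where.isEmpty) "" else where.mkString(" WHERE ", " AND ", ""))
    }

    FakeQuery("bar").filter("id = 42").map("name, 1").sql
    // SELECT name, 1 FROM bar WHERE id = 42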